2025-03-17T17:40:19.5500150Z Current runner version: '2.322.0'
2025-03-17T17:40:19.5507540Z Runner name: 'i-0287a0cab9cae3fa7'
2025-03-17T17:40:19.5508488Z Runner group name: 'Default'
2025-03-17T17:40:19.5509450Z Machine name: 'ip-10-0-54-109'
2025-03-17T17:40:19.5514018Z ##[group]GITHUB_TOKEN Permissions
2025-03-17T17:40:19.5516837Z Actions: read
2025-03-17T17:40:19.5517532Z Attestations: read
2025-03-17T17:40:19.5518182Z Checks: read
2025-03-17T17:40:19.5518820Z Contents: read
2025-03-17T17:40:19.5519451Z Deployments: read
2025-03-17T17:40:19.5520078Z Discussions: read
2025-03-17T17:40:19.5520754Z Issues: read
2025-03-17T17:40:19.5521377Z Metadata: read
2025-03-17T17:40:19.5522010Z Packages: read
2025-03-17T17:40:19.5522712Z Pages: read
2025-03-17T17:40:19.5523347Z PullRequests: read
2025-03-17T17:40:19.5524037Z RepositoryProjects: read
2025-03-17T17:40:19.5524747Z SecurityEvents: read
2025-03-17T17:40:19.5525385Z Statuses: read
2025-03-17T17:40:19.5526049Z ##[endgroup]
2025-03-17T17:40:19.5529512Z Secret source: Actions
2025-03-17T17:40:19.5530524Z Prepare workflow directory
2025-03-17T17:40:19.8650918Z Prepare all required actions
2025-03-17T17:40:19.8694278Z Getting action download info
2025-03-17T17:40:20.0479737Z Download action repository 'pytorch/test-infra@main' (SHA:ff902e48707fd0cce5dbde20afb69d05b7d66870)
2025-03-17T17:40:21.4496877Z Download action repository 'pytorch/pytorch@main' (SHA:224cd9f055d7ebbd5cd417ba3054689b4ea7fde9)
2025-03-17T17:40:34.1605573Z Download action repository 'aws-actions/configure-aws-credentials@v3' (SHA:50ac8dd1e1b10d09dac7b8727528b91bed831ac0)
2025-03-17T17:40:34.3527805Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2025-03-17T17:40:34.6392264Z Getting action download info
2025-03-17T17:40:34.7480427Z Download action repository 'actions/checkout@v4' (SHA:11bd71901bbe5b1630ceea73d27597364c9af683)
2025-03-17T17:40:34.9883182Z Getting action download info
2025-03-17T17:40:35.1106542Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e)
2025-03-17T17:40:35.2917094Z Getting action download info
2025-03-17T17:40:35.4148626Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2025-03-17T17:40:35.6118462Z Getting action download info
2025-03-17T17:40:35.7573285Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/pull/148585/merge (4c2bc68c957f2652a5ff3ab9ed69449972fbd9e1)
2025-03-17T17:40:35.7575380Z ##[group] Inputs
2025-03-17T17:40:35.7575766Z build-environment: linux-focal-py3.13-clang10
2025-03-17T17:40:35.7578632Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "crossref", "shard": 1, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "crossref", "shard": 2, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 1, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 2, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 3, "num_shards": 3, "runner": "linux.2xlarge"}]}
2025-03-17T17:40:35.7581831Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5
2025-03-17T17:40:35.7582629Z sync-tag:
2025-03-17T17:40:35.7583406Z timeout-minutes: 600
2025-03-17T17:40:35.7583698Z use-gha:
2025-03-17T17:40:35.7583950Z dashboard-tag:
2025-03-17T17:40:35.7584222Z s3-bucket: gha-artifacts
2025-03-17T17:40:35.7584525Z aws-role-to-assume:
2025-03-17T17:40:35.7585105Z disable-monitor: false
2025-03-17T17:40:35.7585668Z ##[endgroup]
2025-03-17T17:40:35.7586354Z Complete job name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)
2025-03-17T17:40:35.8022052Z A job started hook has been configured by the self-hosted runner administrator
2025-03-17T17:40:35.8125205Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh'
2025-03-17T17:40:35.8133941Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-03-17T17:40:35.8134617Z ##[endgroup]
2025-03-17T17:40:36.9412164Z Runner Type: linux.2xlarge
2025-03-17T17:40:36.9412660Z Instance Type: c5.2xlarge
2025-03-17T17:40:36.9412979Z AMI Name: unknown
2025-03-17T17:40:36.9440438Z AMI ID: ami-08b5b3a93ed654d19
2025-03-17T17:40:42.1300282Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main
2025-03-17T17:40:42.1300762Z with:
2025-03-17T17:40:42.1301546Z github-secret: ***
2025-03-17T17:40:42.1302296Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash
2025-03-17T17:40:42.1303134Z activate-with-label: false
2025-03-17T17:40:42.1303424Z label: with-ssh
2025-03-17T17:40:42.1303677Z remove-existing-keys: true
2025-03-17T17:40:42.1303965Z fail-silently: true
2025-03-17T17:40:42.1304208Z env:
2025-03-17T17:40:42.1304423Z GIT_DEFAULT_BRANCH: main
2025-03-17T17:40:42.1304694Z ##[endgroup]
2025-03-17T17:40:42.2442716Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info.
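The test-matrix input above is what fans this workflow out into parallel jobs: each entry under "include" names a test config, a shard index, the total number of shards for that config, and the runner label it should land on; this particular job is shard 1 of 3 of the dynamo_wrapped config on linux.2xlarge. A minimal sketch for inspecting such a matrix locally, assuming the JSON has been saved to a file named test-matrix.json and that jq is available (neither assumption comes from the workflow itself):

  # List every (config, shard, runner) combination the matrix expands to.
  jq -r '.include[] | "\(.config) shard \(.shard)/\(.num_shards) on \(.runner)"' test-matrix.json

For this matrix the command would print ten lines, one per test job: five default shards, two crossref shards, and three dynamo_wrapped shards.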
2025-03-17T17:40:42.6503035Z Grabbing public ssh keys from https://github.com/fadara01.keys
2025-03-17T17:40:42.7130782Z ~/.ssh/authorized_keys file found on node, removing ~/.ssh and starting fresh
2025-03-17T17:40:42.7144329Z Public keys pulled and installed to /home/ec2-user/.ssh/authorized_keys
2025-03-17T17:40:42.7180626Z Login using: ssh ec2-user@ec2-18-212-235-63.compute-1.amazonaws.com
2025-03-17T17:40:42.7181684Z All testing is done inside the container, to start an interactive session run:
2025-03-17T17:40:42.7182659Z docker exec -it $(docker container ps --format '{{.ID}}') bash
2025-03-17T17:40:42.7325075Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2025-03-17T17:40:42.7325555Z with:
2025-03-17T17:40:42.7325798Z no-sudo: true
2025-03-17T17:40:42.7326066Z submodules: recursive
2025-03-17T17:40:42.7326354Z fetch-depth: 0
2025-03-17T17:40:42.7326603Z env:
2025-03-17T17:40:42.7326840Z GIT_DEFAULT_BRANCH: main
2025-03-17T17:40:42.7327116Z ##[endgroup]
2025-03-17T17:40:42.7430836Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-03-17T17:40:42.7431861Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-03-17T17:40:42.7440632Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-03-17T17:40:42.7441057Z env:
2025-03-17T17:40:42.7441312Z GIT_DEFAULT_BRANCH: main
2025-03-17T17:40:42.7441578Z ##[endgroup]
2025-03-17T17:40:42.7530674Z ##[group]Run # Use all available CPUs for fetching
2025-03-17T17:40:42.7531125Z # Use all available CPUs for fetching
2025-03-17T17:40:42.7531490Z cd "${GITHUB_WORKSPACE}"
2025-03-17T17:40:42.7531844Z git config --global fetch.parallel 0
2025-03-17T17:40:42.7532249Z git config --global submodule.fetchJobs 0
2025-03-17T17:40:42.7532604Z 
2025-03-17T17:40:42.7532984Z # Clean workspace. The default checkout action should also do this, but
2025-03-17T17:40:42.7533480Z # do it here as well just in case
2025-03-17T17:40:42.7533813Z if [[ -d .git ]]; then
2025-03-17T17:40:42.7534121Z   if [ -z "${NO_SUDO}" ]; then
2025-03-17T17:40:42.7534440Z     sudo git clean -ffdx
2025-03-17T17:40:42.7534732Z   else
2025-03-17T17:40:42.7534978Z     git clean -ffdx
2025-03-17T17:40:42.7535314Z   fi
2025-03-17T17:40:42.7535556Z fi
2025-03-17T17:40:42.7541248Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-03-17T17:40:42.7541662Z env:
2025-03-17T17:40:42.7541886Z GIT_DEFAULT_BRANCH: main
2025-03-17T17:40:42.7542173Z NO_SUDO: true
2025-03-17T17:40:42.7542413Z ##[endgroup]
2025-03-17T17:40:42.7659473Z ##[group]Run actions/checkout@v4
2025-03-17T17:40:42.7659785Z with:
2025-03-17T17:40:42.7660054Z ref: 52b86900e894e6b34d880548ab6883b3d9207fb6
2025-03-17T17:40:42.7660415Z fetch-depth: 0
2025-03-17T17:40:42.7660680Z submodules: recursive
2025-03-17T17:40:42.7660963Z show-progress: false
2025-03-17T17:40:42.7661256Z repository: pytorch/pytorch
2025-03-17T17:40:42.7661679Z token: ***
2025-03-17T17:40:42.7661935Z ssh-strict: true
2025-03-17T17:40:42.7662192Z ssh-user: git
2025-03-17T17:40:42.7662460Z persist-credentials: true
2025-03-17T17:40:42.7662757Z clean: true
2025-03-17T17:40:42.7663031Z sparse-checkout-cone-mode: true
2025-03-17T17:40:42.7663360Z fetch-tags: false
2025-03-17T17:40:42.7663618Z lfs: false
2025-03-17T17:40:42.7663886Z set-safe-directory: true
2025-03-17T17:40:42.7664174Z env:
2025-03-17T17:40:42.7664397Z GIT_DEFAULT_BRANCH: main
2025-03-17T17:40:42.7664682Z ##[endgroup]
2025-03-17T17:40:42.8755995Z Syncing repository: pytorch/pytorch
2025-03-17T17:40:42.8757345Z ##[group]Getting Git version info
2025-03-17T17:40:42.8757870Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-03-17T17:40:42.8758598Z [command]/usr/bin/git version
2025-03-17T17:40:42.8758916Z git version 2.47.1
2025-03-17T17:40:42.8767976Z ##[endgroup]
2025-03-17T17:40:42.8776358Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/d7752cd7-40a7-4406-95fa-0e1f67b475a7/.gitconfig'
2025-03-17T17:40:42.8794048Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/d7752cd7-40a7-4406-95fa-0e1f67b475a7' before making global git config changes
2025-03-17T17:40:42.8795096Z Adding repository directory to the temporary git global config as a safe directory
2025-03-17T17:40:42.8798307Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-03-17T17:40:42.8832710Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-03-17T17:40:42.8835496Z ##[group]Initializing the repository
2025-03-17T17:40:42.8839467Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-03-17T17:40:42.8866606Z hint: Using 'master' as the name for the initial branch. This default branch name
2025-03-17T17:40:42.8867769Z hint: is subject to change. To configure the initial branch name to use in all
2025-03-17T17:40:42.8868834Z hint: of your new repositories, which will suppress this warning, call:
2025-03-17T17:40:42.8869592Z hint:
2025-03-17T17:40:42.8870213Z hint:   git config --global init.defaultBranch <name>
2025-03-17T17:40:42.8870867Z hint:
2025-03-17T17:40:42.8871518Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2025-03-17T17:40:42.8872614Z hint: 'development'. The just-created branch can be renamed via this command:
2025-03-17T17:40:42.8873440Z hint:
2025-03-17T17:40:42.8873882Z hint:   git branch -m <name>
2025-03-17T17:40:42.8874753Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2025-03-17T17:40:42.8879468Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2025-03-17T17:40:42.8903143Z ##[endgroup]
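The checkout here is assembled from plain git commands rather than a single clone: an empty repository is initialized, origin is added, and (because fetch-depth is 0) every head and tag is fetched before the requested commit is checked out. A rough local equivalent, sketched only for illustration — the repo URL, the ref, and the fetch command are taken from the inputs and commands logged in this step, while the final checkout/submodule lines are assumptions, since they fall outside this excerpt:

  # Sketch: reproduce the checkout flow by hand (illustrative, not the action itself).
  git init pytorch && cd pytorch
  git remote add origin https://github.com/pytorch/pytorch
  git config --local gc.auto 0
  git -c protocol.version=2 fetch --prune --no-recurse-submodules origin \
      '+refs/heads/*:refs/remotes/origin/*' '+refs/tags/*:refs/tags/*'
  # Assumed follow-up steps (not shown in this log excerpt):
  git checkout --force 52b86900e894e6b34d880548ab6883b3d9207fb6
  git submodule update --init --recursive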
2025-03-17T17:40:42.8903958Z ##[group]Disabling automatic garbage collection
2025-03-17T17:40:42.8907690Z [command]/usr/bin/git config --local gc.auto 0
2025-03-17T17:40:42.8930252Z ##[endgroup]
2025-03-17T17:40:42.8931011Z ##[group]Setting up auth
2025-03-17T17:40:42.8937246Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-03-17T17:40:42.8962473Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-03-17T17:40:42.9207990Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-03-17T17:40:42.9232129Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-03-17T17:40:42.9474957Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2025-03-17T17:40:42.9516007Z ##[endgroup]
2025-03-17T17:40:42.9523809Z ##[group]Fetching the repository
2025-03-17T17:40:42.9525239Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2025-03-17T17:41:35.7863532Z From https://github.com/pytorch/pytorch
2025-03-17T17:41:35.7864484Z * [new branch] 2.1-dynamic-doc -> origin/2.1-dynamic-doc
2025-03-17T17:41:35.7865602Z * [new branch] 2.6.0.dev20241004+ -> origin/2.6.0.dev20241004+
2025-03-17T17:41:35.7866785Z * [new branch] 20250219_e8m0_intermediate -> origin/20250219_e8m0_intermediate
2025-03-17T17:41:35.7867821Z * [new branch] 20250219_test -> origin/20250219_test
2025-03-17T17:41:35.7869374Z * [new branch] Adjust-Description-for-linux-binary-test-Workflow -> origin/Adjust-Description-for-linux-binary-test-Workflow
2025-03-17T17:41:35.7871039Z * [new branch] Chillee-patch-5 -> origin/Chillee-patch-5
2025-03-17T17:41:35.7872142Z * [new branch] Flamefire-patch-1 -> origin/Flamefire-patch-1
2025-03-17T17:41:35.7873351Z * [new branch] HDCharles-2.6.0-release-notes -> origin/HDCharles-2.6.0-release-notes
2025-03-17T17:41:35.7874764Z * [new branch] JackCaoG/add_new_lazy_counter_macro -> origin/JackCaoG/add_new_lazy_counter_macro
2025-03-17T17:41:35.7876713Z * [new branch] JackCaoG/dynamo_make_fx_non_core_aten_ops -> origin/JackCaoG/dynamo_make_fx_non_core_aten_ops
2025-03-17T17:41:35.7877579Z * [new branch] JackCaoG/update_dynamo_doc -> origin/JackCaoG/update_dynamo_doc
2025-03-17T17:41:35.7878852Z * [new branch] JackCaoG/update_xla_pin_to_skip_test -> origin/JackCaoG/update_xla_pin_to_skip_test
2025-03-17T17:41:35.7880409Z * [new branch] JackCaoG/update_xla_pin_to_skip_test2 -> origin/JackCaoG/update_xla_pin_to_skip_test2
2025-03-17T17:41:35.7881644Z * [new branch] NicolasHug-patch-2 -> origin/NicolasHug-patch-2
2025-03-17T17:41:35.7883093Z * [new branch] PR-AOTInductorNoneBug -> origin/PR-AOTInductorNoneBug
2025-03-17T17:41:35.7884658Z * [new branch] PR-AOTInductorNoneBugFix -> origin/PR-AOTInductorNoneBugFix
[... fetch output truncated: several hundred more "* [new branch] <name> -> origin/<name>" lines follow, covering the remaining remote branches (contributor and bot branches, cherry-pick-*, export-D*, and ghstack gh/<user>/<n>/{base,head,orig} refs), ending in this excerpt at origin/gh/Xia-Weiwen/28/orig ...]
2025-03-17T17:41:35.8613619Z * [new branch] gh/Xia-Weiwen/29/base -> origin/gh/Xia-Weiwen/29/base 2025-03-17T17:41:35.8614528Z * [new branch] gh/Xia-Weiwen/29/head -> origin/gh/Xia-Weiwen/29/head 2025-03-17T17:41:35.8615507Z * [new branch] gh/Xia-Weiwen/29/orig -> origin/gh/Xia-Weiwen/29/orig 2025-03-17T17:41:35.8616880Z * [new branch] gh/Xia-Weiwen/30/base -> origin/gh/Xia-Weiwen/30/base 2025-03-17T17:41:35.8617877Z * [new branch] gh/Xia-Weiwen/30/head -> origin/gh/Xia-Weiwen/30/head 2025-03-17T17:41:35.8618890Z * [new branch] gh/Xia-Weiwen/30/orig -> origin/gh/Xia-Weiwen/30/orig 2025-03-17T17:41:35.8620209Z * [new branch] gh/Xia-Weiwen/31/base -> origin/gh/Xia-Weiwen/31/base 2025-03-17T17:41:35.8621102Z * [new branch] gh/Xia-Weiwen/31/head -> origin/gh/Xia-Weiwen/31/head 2025-03-17T17:41:35.8622078Z * [new branch] gh/Xia-Weiwen/31/orig -> origin/gh/Xia-Weiwen/31/orig 2025-03-17T17:41:35.8624039Z * [new branch] gh/XilunWu/110/base -> origin/gh/XilunWu/110/base 2025-03-17T17:41:35.8624933Z * [new branch] gh/XilunWu/110/head -> origin/gh/XilunWu/110/head 2025-03-17T17:41:35.8625916Z * [new branch] gh/XilunWu/110/orig -> origin/gh/XilunWu/110/orig 2025-03-17T17:41:35.8627580Z * [new branch] gh/XilunWu/114/base -> origin/gh/XilunWu/114/base 2025-03-17T17:41:35.8628512Z * [new branch] gh/XilunWu/114/head -> origin/gh/XilunWu/114/head 2025-03-17T17:41:35.8629550Z * [new branch] gh/XilunWu/114/orig -> origin/gh/XilunWu/114/orig 2025-03-17T17:41:35.8630772Z * [new branch] gh/XilunWu/115/base -> origin/gh/XilunWu/115/base 2025-03-17T17:41:35.8631708Z * [new branch] gh/XilunWu/115/head -> origin/gh/XilunWu/115/head 2025-03-17T17:41:35.8632640Z * [new branch] gh/XilunWu/115/orig -> origin/gh/XilunWu/115/orig 2025-03-17T17:41:35.8633889Z * [new branch] gh/XilunWu/116/base -> origin/gh/XilunWu/116/base 2025-03-17T17:41:35.8634867Z * [new branch] gh/XilunWu/116/head -> origin/gh/XilunWu/116/head 2025-03-17T17:41:35.8635899Z * [new branch] gh/XilunWu/116/orig -> origin/gh/XilunWu/116/orig 2025-03-17T17:41:35.8637202Z * [new branch] gh/XilunWu/117/base -> origin/gh/XilunWu/117/base 2025-03-17T17:41:35.8638225Z * [new branch] gh/XilunWu/117/head -> origin/gh/XilunWu/117/head 2025-03-17T17:41:35.8639212Z * [new branch] gh/XilunWu/117/orig -> origin/gh/XilunWu/117/orig 2025-03-17T17:41:35.8640436Z * [new branch] gh/XilunWu/118/base -> origin/gh/XilunWu/118/base 2025-03-17T17:41:35.8641249Z * [new branch] gh/XilunWu/118/head -> origin/gh/XilunWu/118/head 2025-03-17T17:41:35.8642148Z * [new branch] gh/XilunWu/118/orig -> origin/gh/XilunWu/118/orig 2025-03-17T17:41:35.8643430Z * [new branch] gh/XilunWu/119/base -> origin/gh/XilunWu/119/base 2025-03-17T17:41:35.8644374Z * [new branch] gh/XilunWu/119/head -> origin/gh/XilunWu/119/head 2025-03-17T17:41:35.8645320Z * [new branch] gh/XilunWu/119/orig -> origin/gh/XilunWu/119/orig 2025-03-17T17:41:35.8646681Z * [new branch] gh/XilunWu/120/base -> origin/gh/XilunWu/120/base 2025-03-17T17:41:35.8647681Z * [new branch] gh/XilunWu/120/head -> origin/gh/XilunWu/120/head 2025-03-17T17:41:35.8648631Z * [new branch] gh/XilunWu/120/orig -> origin/gh/XilunWu/120/orig 2025-03-17T17:41:35.8649884Z * [new branch] gh/XilunWu/121/base -> origin/gh/XilunWu/121/base 2025-03-17T17:41:35.8650795Z * [new branch] gh/XilunWu/121/head -> origin/gh/XilunWu/121/head 2025-03-17T17:41:35.8651795Z * [new branch] gh/XilunWu/121/orig -> origin/gh/XilunWu/121/orig 2025-03-17T17:41:35.8653021Z * [new branch] gh/XilunWu/122/base -> origin/gh/XilunWu/122/base 2025-03-17T17:41:35.8653940Z * [new 
branch] gh/XilunWu/122/head -> origin/gh/XilunWu/122/head 2025-03-17T17:41:35.8654914Z * [new branch] gh/XilunWu/122/orig -> origin/gh/XilunWu/122/orig 2025-03-17T17:41:35.8656344Z * [new branch] gh/XilunWu/123/base -> origin/gh/XilunWu/123/base 2025-03-17T17:41:35.8657282Z * [new branch] gh/XilunWu/123/head -> origin/gh/XilunWu/123/head 2025-03-17T17:41:35.8658201Z * [new branch] gh/XilunWu/123/orig -> origin/gh/XilunWu/123/orig 2025-03-17T17:41:35.8659654Z * [new branch] gh/XilunWu/124/base -> origin/gh/XilunWu/124/base 2025-03-17T17:41:35.8660474Z * [new branch] gh/XilunWu/124/head -> origin/gh/XilunWu/124/head 2025-03-17T17:41:35.8661462Z * [new branch] gh/XilunWu/124/orig -> origin/gh/XilunWu/124/orig 2025-03-17T17:41:35.8662926Z * [new branch] gh/XilunWu/125/base -> origin/gh/XilunWu/125/base 2025-03-17T17:41:35.8663813Z * [new branch] gh/XilunWu/125/head -> origin/gh/XilunWu/125/head 2025-03-17T17:41:35.8664769Z * [new branch] gh/XilunWu/125/orig -> origin/gh/XilunWu/125/orig 2025-03-17T17:41:35.8666510Z * [new branch] gh/XuehaiPan/1/base -> origin/gh/XuehaiPan/1/base 2025-03-17T17:41:35.8667520Z * [new branch] gh/XuehaiPan/1/head -> origin/gh/XuehaiPan/1/head 2025-03-17T17:41:35.8668468Z * [new branch] gh/XuehaiPan/1/orig -> origin/gh/XuehaiPan/1/orig 2025-03-17T17:41:35.8669947Z * [new branch] gh/XuehaiPan/105/base -> origin/gh/XuehaiPan/105/base 2025-03-17T17:41:35.8670888Z * [new branch] gh/XuehaiPan/105/head -> origin/gh/XuehaiPan/105/head 2025-03-17T17:41:35.8671866Z * [new branch] gh/XuehaiPan/105/orig -> origin/gh/XuehaiPan/105/orig 2025-03-17T17:41:35.8673405Z * [new branch] gh/XuehaiPan/108/base -> origin/gh/XuehaiPan/108/base 2025-03-17T17:41:35.8674309Z * [new branch] gh/XuehaiPan/108/head -> origin/gh/XuehaiPan/108/head 2025-03-17T17:41:35.8675323Z * [new branch] gh/XuehaiPan/108/orig -> origin/gh/XuehaiPan/108/orig 2025-03-17T17:41:35.8676818Z * [new branch] gh/XuehaiPan/109/base -> origin/gh/XuehaiPan/109/base 2025-03-17T17:41:35.8677714Z * [new branch] gh/XuehaiPan/109/head -> origin/gh/XuehaiPan/109/head 2025-03-17T17:41:35.8678860Z * [new branch] gh/XuehaiPan/109/orig -> origin/gh/XuehaiPan/109/orig 2025-03-17T17:41:35.8680567Z * [new branch] gh/XuehaiPan/13/base -> origin/gh/XuehaiPan/13/base 2025-03-17T17:41:35.8681513Z * [new branch] gh/XuehaiPan/13/head -> origin/gh/XuehaiPan/13/head 2025-03-17T17:41:35.8682530Z * [new branch] gh/XuehaiPan/13/orig -> origin/gh/XuehaiPan/13/orig 2025-03-17T17:41:35.8683888Z * [new branch] gh/XuehaiPan/14/base -> origin/gh/XuehaiPan/14/base 2025-03-17T17:41:35.8684833Z * [new branch] gh/XuehaiPan/14/head -> origin/gh/XuehaiPan/14/head 2025-03-17T17:41:35.8685787Z * [new branch] gh/XuehaiPan/14/orig -> origin/gh/XuehaiPan/14/orig 2025-03-17T17:41:35.8687152Z * [new branch] gh/XuehaiPan/179/base -> origin/gh/XuehaiPan/179/base 2025-03-17T17:41:35.8688052Z * [new branch] gh/XuehaiPan/179/head -> origin/gh/XuehaiPan/179/head 2025-03-17T17:41:35.8689139Z * [new branch] gh/XuehaiPan/179/orig -> origin/gh/XuehaiPan/179/orig 2025-03-17T17:41:35.8690512Z * [new branch] gh/XuehaiPan/180/base -> origin/gh/XuehaiPan/180/base 2025-03-17T17:41:35.8691398Z * [new branch] gh/XuehaiPan/180/head -> origin/gh/XuehaiPan/180/head 2025-03-17T17:41:35.8692416Z * [new branch] gh/XuehaiPan/180/orig -> origin/gh/XuehaiPan/180/orig 2025-03-17T17:41:35.8693769Z * [new branch] gh/XuehaiPan/182/base -> origin/gh/XuehaiPan/182/base 2025-03-17T17:41:35.8694669Z * [new branch] gh/XuehaiPan/182/head -> origin/gh/XuehaiPan/182/head 2025-03-17T17:41:35.8695628Z * 
[new branch] gh/XuehaiPan/182/orig -> origin/gh/XuehaiPan/182/orig 2025-03-17T17:41:35.8697035Z * [new branch] gh/XuehaiPan/183/base -> origin/gh/XuehaiPan/183/base 2025-03-17T17:41:35.8697955Z * [new branch] gh/XuehaiPan/183/head -> origin/gh/XuehaiPan/183/head 2025-03-17T17:41:35.8698913Z * [new branch] gh/XuehaiPan/183/orig -> origin/gh/XuehaiPan/183/orig 2025-03-17T17:41:35.8700279Z * [new branch] gh/XuehaiPan/185/base -> origin/gh/XuehaiPan/185/base 2025-03-17T17:41:35.8701184Z * [new branch] gh/XuehaiPan/185/head -> origin/gh/XuehaiPan/185/head 2025-03-17T17:41:35.8702153Z * [new branch] gh/XuehaiPan/185/orig -> origin/gh/XuehaiPan/185/orig 2025-03-17T17:41:35.8703502Z * [new branch] gh/XuehaiPan/188/base -> origin/gh/XuehaiPan/188/base 2025-03-17T17:41:35.8704390Z * [new branch] gh/XuehaiPan/188/head -> origin/gh/XuehaiPan/188/head 2025-03-17T17:41:35.8705372Z * [new branch] gh/XuehaiPan/188/orig -> origin/gh/XuehaiPan/188/orig 2025-03-17T17:41:35.8706741Z * [new branch] gh/XuehaiPan/189/base -> origin/gh/XuehaiPan/189/base 2025-03-17T17:41:35.8707696Z * [new branch] gh/XuehaiPan/189/head -> origin/gh/XuehaiPan/189/head 2025-03-17T17:41:35.8708709Z * [new branch] gh/XuehaiPan/189/orig -> origin/gh/XuehaiPan/189/orig 2025-03-17T17:41:35.8710099Z * [new branch] gh/XuehaiPan/210/base -> origin/gh/XuehaiPan/210/base 2025-03-17T17:41:35.8710947Z * [new branch] gh/XuehaiPan/210/head -> origin/gh/XuehaiPan/210/head 2025-03-17T17:41:35.8711911Z * [new branch] gh/XuehaiPan/210/orig -> origin/gh/XuehaiPan/210/orig 2025-03-17T17:41:35.8713308Z * [new branch] gh/XuehaiPan/211/base -> origin/gh/XuehaiPan/211/base 2025-03-17T17:41:35.8714205Z * [new branch] gh/XuehaiPan/211/head -> origin/gh/XuehaiPan/211/head 2025-03-17T17:41:35.8715260Z * [new branch] gh/XuehaiPan/211/orig -> origin/gh/XuehaiPan/211/orig 2025-03-17T17:41:35.8716554Z * [new branch] gh/XuehaiPan/217/base -> origin/gh/XuehaiPan/217/base 2025-03-17T17:41:35.8717493Z * [new branch] gh/XuehaiPan/217/head -> origin/gh/XuehaiPan/217/head 2025-03-17T17:41:35.8718462Z * [new branch] gh/XuehaiPan/217/orig -> origin/gh/XuehaiPan/217/orig 2025-03-17T17:41:35.8719870Z * [new branch] gh/XuehaiPan/218/base -> origin/gh/XuehaiPan/218/base 2025-03-17T17:41:35.8720736Z * [new branch] gh/XuehaiPan/218/head -> origin/gh/XuehaiPan/218/head 2025-03-17T17:41:35.8721697Z * [new branch] gh/XuehaiPan/218/orig -> origin/gh/XuehaiPan/218/orig 2025-03-17T17:41:35.8723409Z * [new branch] gh/XuehaiPan/219/base -> origin/gh/XuehaiPan/219/base 2025-03-17T17:41:35.8724328Z * [new branch] gh/XuehaiPan/219/head -> origin/gh/XuehaiPan/219/head 2025-03-17T17:41:35.8725364Z * [new branch] gh/XuehaiPan/219/orig -> origin/gh/XuehaiPan/219/orig 2025-03-17T17:41:35.8726784Z * [new branch] gh/XuehaiPan/221/base -> origin/gh/XuehaiPan/221/base 2025-03-17T17:41:35.8727701Z * [new branch] gh/XuehaiPan/221/head -> origin/gh/XuehaiPan/221/head 2025-03-17T17:41:35.8728761Z * [new branch] gh/XuehaiPan/221/orig -> origin/gh/XuehaiPan/221/orig 2025-03-17T17:41:35.8730146Z * [new branch] gh/XuehaiPan/222/base -> origin/gh/XuehaiPan/222/base 2025-03-17T17:41:35.8731089Z * [new branch] gh/XuehaiPan/222/head -> origin/gh/XuehaiPan/222/head 2025-03-17T17:41:35.8732132Z * [new branch] gh/XuehaiPan/222/orig -> origin/gh/XuehaiPan/222/orig 2025-03-17T17:41:35.8733570Z * [new branch] gh/XuehaiPan/223/base -> origin/gh/XuehaiPan/223/base 2025-03-17T17:41:35.8734547Z * [new branch] gh/XuehaiPan/223/head -> origin/gh/XuehaiPan/223/head 2025-03-17T17:41:35.8735475Z * [new branch] 
gh/XuehaiPan/223/orig -> origin/gh/XuehaiPan/223/orig 2025-03-17T17:41:35.8737046Z * [new branch] gh/XuehaiPan/224/base -> origin/gh/XuehaiPan/224/base 2025-03-17T17:41:35.8738039Z * [new branch] gh/XuehaiPan/224/head -> origin/gh/XuehaiPan/224/head 2025-03-17T17:41:35.8739027Z * [new branch] gh/XuehaiPan/224/orig -> origin/gh/XuehaiPan/224/orig 2025-03-17T17:41:35.8740465Z * [new branch] gh/XuehaiPan/225/base -> origin/gh/XuehaiPan/225/base 2025-03-17T17:41:35.8741413Z * [new branch] gh/XuehaiPan/225/head -> origin/gh/XuehaiPan/225/head 2025-03-17T17:41:35.8742354Z * [new branch] gh/XuehaiPan/225/orig -> origin/gh/XuehaiPan/225/orig 2025-03-17T17:41:35.8743757Z * [new branch] gh/XuehaiPan/226/base -> origin/gh/XuehaiPan/226/base 2025-03-17T17:41:35.8744637Z * [new branch] gh/XuehaiPan/226/head -> origin/gh/XuehaiPan/226/head 2025-03-17T17:41:35.8745658Z * [new branch] gh/XuehaiPan/226/orig -> origin/gh/XuehaiPan/226/orig 2025-03-17T17:41:35.8747174Z * [new branch] gh/XuehaiPan/227/base -> origin/gh/XuehaiPan/227/base 2025-03-17T17:41:35.8748075Z * [new branch] gh/XuehaiPan/227/head -> origin/gh/XuehaiPan/227/head 2025-03-17T17:41:35.8749171Z * [new branch] gh/XuehaiPan/227/orig -> origin/gh/XuehaiPan/227/orig 2025-03-17T17:41:35.8750462Z * [new branch] gh/XuehaiPan/228/base -> origin/gh/XuehaiPan/228/base 2025-03-17T17:41:35.8751384Z * [new branch] gh/XuehaiPan/228/head -> origin/gh/XuehaiPan/228/head 2025-03-17T17:41:35.8752313Z * [new branch] gh/XuehaiPan/228/orig -> origin/gh/XuehaiPan/228/orig 2025-03-17T17:41:35.8753684Z * [new branch] gh/XuehaiPan/229/base -> origin/gh/XuehaiPan/229/base 2025-03-17T17:41:35.8754579Z * [new branch] gh/XuehaiPan/229/head -> origin/gh/XuehaiPan/229/head 2025-03-17T17:41:35.8755436Z * [new branch] gh/XuehaiPan/229/orig -> origin/gh/XuehaiPan/229/orig 2025-03-17T17:41:35.8756758Z * [new branch] gh/XuehaiPan/230/base -> origin/gh/XuehaiPan/230/base 2025-03-17T17:41:35.8757675Z * [new branch] gh/XuehaiPan/230/head -> origin/gh/XuehaiPan/230/head 2025-03-17T17:41:35.8758622Z * [new branch] gh/XuehaiPan/230/orig -> origin/gh/XuehaiPan/230/orig 2025-03-17T17:41:35.8760047Z * [new branch] gh/XuehaiPan/231/base -> origin/gh/XuehaiPan/231/base 2025-03-17T17:41:35.8760957Z * [new branch] gh/XuehaiPan/231/head -> origin/gh/XuehaiPan/231/head 2025-03-17T17:41:35.8761956Z * [new branch] gh/XuehaiPan/231/orig -> origin/gh/XuehaiPan/231/orig 2025-03-17T17:41:35.8763290Z * [new branch] gh/XuehaiPan/232/base -> origin/gh/XuehaiPan/232/base 2025-03-17T17:41:35.8764206Z * [new branch] gh/XuehaiPan/232/head -> origin/gh/XuehaiPan/232/head 2025-03-17T17:41:35.8765174Z * [new branch] gh/XuehaiPan/232/orig -> origin/gh/XuehaiPan/232/orig 2025-03-17T17:41:35.8766691Z * [new branch] gh/XuehaiPan/233/base -> origin/gh/XuehaiPan/233/base 2025-03-17T17:41:35.8767608Z * [new branch] gh/XuehaiPan/233/head -> origin/gh/XuehaiPan/233/head 2025-03-17T17:41:35.8768621Z * [new branch] gh/XuehaiPan/233/orig -> origin/gh/XuehaiPan/233/orig 2025-03-17T17:41:35.8770032Z * [new branch] gh/XuehaiPan/234/base -> origin/gh/XuehaiPan/234/base 2025-03-17T17:41:35.8771029Z * [new branch] gh/XuehaiPan/234/head -> origin/gh/XuehaiPan/234/head 2025-03-17T17:41:35.8772106Z * [new branch] gh/XuehaiPan/234/orig -> origin/gh/XuehaiPan/234/orig 2025-03-17T17:41:35.8773866Z * [new branch] gh/XuehaiPan/236/base -> origin/gh/XuehaiPan/236/base 2025-03-17T17:41:35.8774550Z * [new branch] gh/XuehaiPan/236/head -> origin/gh/XuehaiPan/236/head 2025-03-17T17:41:35.8775519Z * [new branch] gh/XuehaiPan/236/orig -> 
origin/gh/XuehaiPan/236/orig 2025-03-17T17:41:35.8776988Z * [new branch] gh/XuehaiPan/237/base -> origin/gh/XuehaiPan/237/base 2025-03-17T17:41:35.8777921Z * [new branch] gh/XuehaiPan/237/head -> origin/gh/XuehaiPan/237/head 2025-03-17T17:41:35.8778880Z * [new branch] gh/XuehaiPan/237/orig -> origin/gh/XuehaiPan/237/orig 2025-03-17T17:41:35.8780317Z * [new branch] gh/XuehaiPan/238/base -> origin/gh/XuehaiPan/238/base 2025-03-17T17:41:35.8781358Z * [new branch] gh/XuehaiPan/238/head -> origin/gh/XuehaiPan/238/head 2025-03-17T17:41:35.8782685Z * [new branch] gh/XuehaiPan/238/orig -> origin/gh/XuehaiPan/238/orig 2025-03-17T17:41:35.8784106Z * [new branch] gh/XuehaiPan/239/base -> origin/gh/XuehaiPan/239/base 2025-03-17T17:41:35.8785002Z * [new branch] gh/XuehaiPan/239/head -> origin/gh/XuehaiPan/239/head 2025-03-17T17:41:35.8785950Z * [new branch] gh/XuehaiPan/239/orig -> origin/gh/XuehaiPan/239/orig 2025-03-17T17:41:35.8787449Z * [new branch] gh/XuehaiPan/240/base -> origin/gh/XuehaiPan/240/base 2025-03-17T17:41:35.8788315Z * [new branch] gh/XuehaiPan/240/head -> origin/gh/XuehaiPan/240/head 2025-03-17T17:41:35.8789292Z * [new branch] gh/XuehaiPan/240/orig -> origin/gh/XuehaiPan/240/orig 2025-03-17T17:41:35.8790684Z * [new branch] gh/XuehaiPan/241/base -> origin/gh/XuehaiPan/241/base 2025-03-17T17:41:35.8791604Z * [new branch] gh/XuehaiPan/241/head -> origin/gh/XuehaiPan/241/head 2025-03-17T17:41:35.8792672Z * [new branch] gh/XuehaiPan/241/orig -> origin/gh/XuehaiPan/241/orig 2025-03-17T17:41:35.8794103Z * [new branch] gh/XuehaiPan/242/base -> origin/gh/XuehaiPan/242/base 2025-03-17T17:41:35.8794968Z * [new branch] gh/XuehaiPan/242/head -> origin/gh/XuehaiPan/242/head 2025-03-17T17:41:35.8795933Z * [new branch] gh/XuehaiPan/242/orig -> origin/gh/XuehaiPan/242/orig 2025-03-17T17:41:35.8797353Z * [new branch] gh/XuehaiPan/243/base -> origin/gh/XuehaiPan/243/base 2025-03-17T17:41:35.8798234Z * [new branch] gh/XuehaiPan/243/head -> origin/gh/XuehaiPan/243/head 2025-03-17T17:41:35.8800001Z * [new branch] gh/XuehaiPan/243/orig -> origin/gh/XuehaiPan/243/orig 2025-03-17T17:41:35.8801150Z * [new branch] gh/XuehaiPan/244/base -> origin/gh/XuehaiPan/244/base 2025-03-17T17:41:35.8802081Z * [new branch] gh/XuehaiPan/244/head -> origin/gh/XuehaiPan/244/head 2025-03-17T17:41:35.8803149Z * [new branch] gh/XuehaiPan/244/orig -> origin/gh/XuehaiPan/244/orig 2025-03-17T17:41:35.8804178Z * [new branch] gh/XuehaiPan/245/base -> origin/gh/XuehaiPan/245/base 2025-03-17T17:41:35.8805165Z * [new branch] gh/XuehaiPan/245/head -> origin/gh/XuehaiPan/245/head 2025-03-17T17:41:35.8806394Z * [new branch] gh/XuehaiPan/245/orig -> origin/gh/XuehaiPan/245/orig 2025-03-17T17:41:35.8807509Z * [new branch] gh/XuehaiPan/246/base -> origin/gh/XuehaiPan/246/base 2025-03-17T17:41:35.8808646Z * [new branch] gh/XuehaiPan/246/head -> origin/gh/XuehaiPan/246/head 2025-03-17T17:41:35.8809634Z * [new branch] gh/XuehaiPan/246/orig -> origin/gh/XuehaiPan/246/orig 2025-03-17T17:41:35.8810829Z * [new branch] gh/XuehaiPan/247/base -> origin/gh/XuehaiPan/247/base 2025-03-17T17:41:35.8811823Z * [new branch] gh/XuehaiPan/247/head -> origin/gh/XuehaiPan/247/head 2025-03-17T17:41:35.8812932Z * [new branch] gh/XuehaiPan/247/orig -> origin/gh/XuehaiPan/247/orig 2025-03-17T17:41:35.8814100Z * [new branch] gh/XuehaiPan/248/base -> origin/gh/XuehaiPan/248/base 2025-03-17T17:41:35.8815085Z * [new branch] gh/XuehaiPan/248/head -> origin/gh/XuehaiPan/248/head 2025-03-17T17:41:35.8816075Z * [new branch] gh/XuehaiPan/248/orig -> 
origin/gh/XuehaiPan/248/orig 2025-03-17T17:41:35.8817340Z * [new branch] gh/XuehaiPan/249/base -> origin/gh/XuehaiPan/249/base 2025-03-17T17:41:35.8818313Z * [new branch] gh/XuehaiPan/249/head -> origin/gh/XuehaiPan/249/head 2025-03-17T17:41:35.8819362Z * [new branch] gh/XuehaiPan/249/orig -> origin/gh/XuehaiPan/249/orig 2025-03-17T17:41:35.8820726Z * [new branch] gh/XuehaiPan/250/base -> origin/gh/XuehaiPan/250/base 2025-03-17T17:41:35.8821689Z * [new branch] gh/XuehaiPan/250/head -> origin/gh/XuehaiPan/250/head 2025-03-17T17:41:35.8822665Z * [new branch] gh/XuehaiPan/250/orig -> origin/gh/XuehaiPan/250/orig 2025-03-17T17:41:35.8824045Z * [new branch] gh/XuehaiPan/251/base -> origin/gh/XuehaiPan/251/base 2025-03-17T17:41:35.8825108Z * [new branch] gh/XuehaiPan/251/head -> origin/gh/XuehaiPan/251/head 2025-03-17T17:41:35.8826137Z * [new branch] gh/XuehaiPan/251/orig -> origin/gh/XuehaiPan/251/orig 2025-03-17T17:41:35.8827427Z * [new branch] gh/XuehaiPan/252/base -> origin/gh/XuehaiPan/252/base 2025-03-17T17:41:35.8828528Z * [new branch] gh/XuehaiPan/252/head -> origin/gh/XuehaiPan/252/head 2025-03-17T17:41:35.8829487Z * [new branch] gh/XuehaiPan/252/orig -> origin/gh/XuehaiPan/252/orig 2025-03-17T17:41:35.8830659Z * [new branch] gh/XuehaiPan/253/base -> origin/gh/XuehaiPan/253/base 2025-03-17T17:41:35.8831616Z * [new branch] gh/XuehaiPan/253/head -> origin/gh/XuehaiPan/253/head 2025-03-17T17:41:35.8832780Z * [new branch] gh/XuehaiPan/253/orig -> origin/gh/XuehaiPan/253/orig 2025-03-17T17:41:35.8834091Z * [new branch] gh/XuehaiPan/254/base -> origin/gh/XuehaiPan/254/base 2025-03-17T17:41:35.8835062Z * [new branch] gh/XuehaiPan/254/head -> origin/gh/XuehaiPan/254/head 2025-03-17T17:41:35.8836166Z * [new branch] gh/XuehaiPan/254/orig -> origin/gh/XuehaiPan/254/orig 2025-03-17T17:41:35.8837598Z * [new branch] gh/XuehaiPan/255/base -> origin/gh/XuehaiPan/255/base 2025-03-17T17:41:35.8840823Z * [new branch] gh/XuehaiPan/255/head -> origin/gh/XuehaiPan/255/head 2025-03-17T17:41:35.8841889Z * [new branch] gh/XuehaiPan/255/orig -> origin/gh/XuehaiPan/255/orig 2025-03-17T17:41:35.8843139Z * [new branch] gh/XuehaiPan/256/base -> origin/gh/XuehaiPan/256/base 2025-03-17T17:41:35.8844117Z * [new branch] gh/XuehaiPan/256/head -> origin/gh/XuehaiPan/256/head 2025-03-17T17:41:35.8845111Z * [new branch] gh/XuehaiPan/256/orig -> origin/gh/XuehaiPan/256/orig 2025-03-17T17:41:35.8846514Z * [new branch] gh/XuehaiPan/257/base -> origin/gh/XuehaiPan/257/base 2025-03-17T17:41:35.8847574Z * [new branch] gh/XuehaiPan/257/head -> origin/gh/XuehaiPan/257/head 2025-03-17T17:41:35.8848593Z * [new branch] gh/XuehaiPan/257/orig -> origin/gh/XuehaiPan/257/orig 2025-03-17T17:41:35.8849768Z * [new branch] gh/XuehaiPan/258/base -> origin/gh/XuehaiPan/258/base 2025-03-17T17:41:35.8850822Z * [new branch] gh/XuehaiPan/258/head -> origin/gh/XuehaiPan/258/head 2025-03-17T17:41:35.8851796Z * [new branch] gh/XuehaiPan/258/orig -> origin/gh/XuehaiPan/258/orig 2025-03-17T17:41:35.8852995Z * [new branch] gh/XuehaiPan/259/base -> origin/gh/XuehaiPan/259/base 2025-03-17T17:41:35.8854204Z * [new branch] gh/XuehaiPan/259/head -> origin/gh/XuehaiPan/259/head 2025-03-17T17:41:35.8855129Z * [new branch] gh/XuehaiPan/259/orig -> origin/gh/XuehaiPan/259/orig 2025-03-17T17:41:35.8856410Z * [new branch] gh/XuehaiPan/260/base -> origin/gh/XuehaiPan/260/base 2025-03-17T17:41:35.8857415Z * [new branch] gh/XuehaiPan/260/head -> origin/gh/XuehaiPan/260/head 2025-03-17T17:41:35.8858407Z * [new branch] gh/XuehaiPan/260/orig -> 
origin/gh/XuehaiPan/260/orig 2025-03-17T17:41:35.8859694Z * [new branch] gh/XuehaiPan/261/base -> origin/gh/XuehaiPan/261/base 2025-03-17T17:41:35.8860677Z * [new branch] gh/XuehaiPan/261/head -> origin/gh/XuehaiPan/261/head 2025-03-17T17:41:35.8861649Z * [new branch] gh/XuehaiPan/261/orig -> origin/gh/XuehaiPan/261/orig 2025-03-17T17:41:35.8863008Z * [new branch] gh/XuehaiPan/30/base -> origin/gh/XuehaiPan/30/base 2025-03-17T17:41:35.8863978Z * [new branch] gh/XuehaiPan/30/head -> origin/gh/XuehaiPan/30/head 2025-03-17T17:41:35.8864956Z * [new branch] gh/XuehaiPan/30/orig -> origin/gh/XuehaiPan/30/orig 2025-03-17T17:41:35.8866383Z * [new branch] gh/XuehaiPan/72/base -> origin/gh/XuehaiPan/72/base 2025-03-17T17:41:35.8867542Z * [new branch] gh/XuehaiPan/72/head -> origin/gh/XuehaiPan/72/head 2025-03-17T17:41:35.8868607Z * [new branch] gh/XuehaiPan/72/orig -> origin/gh/XuehaiPan/72/orig 2025-03-17T17:41:35.8869780Z * [new branch] gh/XuehaiPan/9/base -> origin/gh/XuehaiPan/9/base 2025-03-17T17:41:35.8870714Z * [new branch] gh/XuehaiPan/9/orig -> origin/gh/XuehaiPan/9/orig 2025-03-17T17:41:35.8872071Z * [new branch] gh/XuehaiPan/97/base -> origin/gh/XuehaiPan/97/base 2025-03-17T17:41:35.8873109Z * [new branch] gh/XuehaiPan/97/head -> origin/gh/XuehaiPan/97/head 2025-03-17T17:41:35.8874073Z * [new branch] gh/XuehaiPan/97/orig -> origin/gh/XuehaiPan/97/orig 2025-03-17T17:41:35.8875439Z * [new branch] gh/XuehaiPan/98/base -> origin/gh/XuehaiPan/98/base 2025-03-17T17:41:35.8876415Z * [new branch] gh/XuehaiPan/98/head -> origin/gh/XuehaiPan/98/head 2025-03-17T17:41:35.8877351Z * [new branch] gh/XuehaiPan/98/orig -> origin/gh/XuehaiPan/98/orig 2025-03-17T17:41:35.8878739Z * [new branch] gh/XuehaiPan/99/base -> origin/gh/XuehaiPan/99/base 2025-03-17T17:41:35.8879687Z * [new branch] gh/XuehaiPan/99/head -> origin/gh/XuehaiPan/99/head 2025-03-17T17:41:35.8880667Z * [new branch] gh/XuehaiPan/99/orig -> origin/gh/XuehaiPan/99/orig 2025-03-17T17:41:35.8882473Z * [new branch] gh/ZhiweiYan-96/23/base -> origin/gh/ZhiweiYan-96/23/base 2025-03-17T17:41:35.8883635Z * [new branch] gh/ZhiweiYan-96/23/head -> origin/gh/ZhiweiYan-96/23/head 2025-03-17T17:41:35.8884594Z * [new branch] gh/ZhiweiYan-96/23/orig -> origin/gh/ZhiweiYan-96/23/orig 2025-03-17T17:41:35.8885889Z * [new branch] gh/ZhiweiYan-96/27/base -> origin/gh/ZhiweiYan-96/27/base 2025-03-17T17:41:35.8886876Z * [new branch] gh/ZhiweiYan-96/27/head -> origin/gh/ZhiweiYan-96/27/head 2025-03-17T17:41:35.8887872Z * [new branch] gh/ZhiweiYan-96/27/orig -> origin/gh/ZhiweiYan-96/27/orig 2025-03-17T17:41:35.8889108Z * [new branch] gh/ZhiweiYan-96/29/base -> origin/gh/ZhiweiYan-96/29/base 2025-03-17T17:41:35.8890096Z * [new branch] gh/ZhiweiYan-96/29/head -> origin/gh/ZhiweiYan-96/29/head 2025-03-17T17:41:35.8891078Z * [new branch] gh/ZhiweiYan-96/29/orig -> origin/gh/ZhiweiYan-96/29/orig 2025-03-17T17:41:35.8892571Z * [new branch] gh/ZhiweiYan-96/30/base -> origin/gh/ZhiweiYan-96/30/base 2025-03-17T17:41:35.8893514Z * [new branch] gh/ZhiweiYan-96/30/head -> origin/gh/ZhiweiYan-96/30/head 2025-03-17T17:41:35.8894565Z * [new branch] gh/ZhiweiYan-96/30/orig -> origin/gh/ZhiweiYan-96/30/orig 2025-03-17T17:41:35.8895703Z * [new branch] gh/ZhiweiYan-96/31/base -> origin/gh/ZhiweiYan-96/31/base 2025-03-17T17:41:35.8896675Z * [new branch] gh/ZhiweiYan-96/31/head -> origin/gh/ZhiweiYan-96/31/head 2025-03-17T17:41:35.8897931Z * [new branch] gh/ZhiweiYan-96/31/orig -> origin/gh/ZhiweiYan-96/31/orig 2025-03-17T17:41:35.8899235Z * [new branch] gh/ZhiweiYan-96/32/base -> 
origin/gh/ZhiweiYan-96/32/base 2025-03-17T17:41:35.8900207Z * [new branch] gh/ZhiweiYan-96/32/head -> origin/gh/ZhiweiYan-96/32/head 2025-03-17T17:41:35.8901184Z * [new branch] gh/ZhiweiYan-96/32/orig -> origin/gh/ZhiweiYan-96/32/orig 2025-03-17T17:41:35.8902527Z * [new branch] gh/ZhiweiYan-96/33/base -> origin/gh/ZhiweiYan-96/33/base 2025-03-17T17:41:35.8903656Z * [new branch] gh/ZhiweiYan-96/33/head -> origin/gh/ZhiweiYan-96/33/head 2025-03-17T17:41:35.8904617Z * [new branch] gh/ZhiweiYan-96/33/orig -> origin/gh/ZhiweiYan-96/33/orig 2025-03-17T17:41:35.8905743Z * [new branch] gh/ZhiweiYan-96/38/base -> origin/gh/ZhiweiYan-96/38/base 2025-03-17T17:41:35.8906865Z * [new branch] gh/ZhiweiYan-96/38/head -> origin/gh/ZhiweiYan-96/38/head 2025-03-17T17:41:35.8907845Z * [new branch] gh/ZhiweiYan-96/38/orig -> origin/gh/ZhiweiYan-96/38/orig 2025-03-17T17:41:35.8909118Z * [new branch] gh/ZhiweiYan-96/39/base -> origin/gh/ZhiweiYan-96/39/base 2025-03-17T17:41:35.8910121Z * [new branch] gh/ZhiweiYan-96/39/head -> origin/gh/ZhiweiYan-96/39/head 2025-03-17T17:41:35.8911205Z * [new branch] gh/ZhiweiYan-96/39/orig -> origin/gh/ZhiweiYan-96/39/orig 2025-03-17T17:41:35.8912324Z * [new branch] gh/ZhiweiYan-96/40/base -> origin/gh/ZhiweiYan-96/40/base 2025-03-17T17:41:35.8913432Z * [new branch] gh/ZhiweiYan-96/40/head -> origin/gh/ZhiweiYan-96/40/head 2025-03-17T17:41:35.8914432Z * [new branch] gh/ZhiweiYan-96/40/orig -> origin/gh/ZhiweiYan-96/40/orig 2025-03-17T17:41:35.8915616Z * [new branch] gh/ZhiweiYan-96/41/base -> origin/gh/ZhiweiYan-96/41/base 2025-03-17T17:41:35.8916589Z * [new branch] gh/ZhiweiYan-96/41/head -> origin/gh/ZhiweiYan-96/41/head 2025-03-17T17:41:35.8917561Z * [new branch] gh/ZhiweiYan-96/41/orig -> origin/gh/ZhiweiYan-96/41/orig 2025-03-17T17:41:35.8918918Z * [new branch] gh/ZhiweiYan-96/42/base -> origin/gh/ZhiweiYan-96/42/base 2025-03-17T17:41:35.8919900Z * [new branch] gh/ZhiweiYan-96/42/head -> origin/gh/ZhiweiYan-96/42/head 2025-03-17T17:41:35.8920996Z * [new branch] gh/ZhiweiYan-96/42/orig -> origin/gh/ZhiweiYan-96/42/orig 2025-03-17T17:41:35.8922032Z * [new branch] gh/ZhiweiYan-96/43/base -> origin/gh/ZhiweiYan-96/43/base 2025-03-17T17:41:35.8923012Z * [new branch] gh/ZhiweiYan-96/43/head -> origin/gh/ZhiweiYan-96/43/head 2025-03-17T17:41:35.8924017Z * [new branch] gh/ZhiweiYan-96/43/orig -> origin/gh/ZhiweiYan-96/43/orig 2025-03-17T17:41:35.8925500Z * [new branch] gh/ZhiweiYan-96/44/base -> origin/gh/ZhiweiYan-96/44/base 2025-03-17T17:41:35.8926507Z * [new branch] gh/ZhiweiYan-96/44/head -> origin/gh/ZhiweiYan-96/44/head 2025-03-17T17:41:35.8927762Z * [new branch] gh/ZhiweiYan-96/45/base -> origin/gh/ZhiweiYan-96/45/base 2025-03-17T17:41:35.8928797Z * [new branch] gh/ZhiweiYan-96/45/head -> origin/gh/ZhiweiYan-96/45/head 2025-03-17T17:41:35.8930100Z * [new branch] gh/ZhiweiYan-96/46/base -> origin/gh/ZhiweiYan-96/46/base 2025-03-17T17:41:35.8931067Z * [new branch] gh/ZhiweiYan-96/46/head -> origin/gh/ZhiweiYan-96/46/head 2025-03-17T17:41:35.8932097Z * [new branch] gh/ZhiweiYan-96/46/orig -> origin/gh/ZhiweiYan-96/46/orig 2025-03-17T17:41:35.8933284Z * [new branch] gh/ZhiweiYan-96/47/base -> origin/gh/ZhiweiYan-96/47/base 2025-03-17T17:41:35.8934276Z * [new branch] gh/ZhiweiYan-96/47/head -> origin/gh/ZhiweiYan-96/47/head 2025-03-17T17:41:35.8935399Z * [new branch] gh/ZhiweiYan-96/47/orig -> origin/gh/ZhiweiYan-96/47/orig 2025-03-17T17:41:35.8936684Z * [new branch] gh/ZhiweiYan-96/48/base -> origin/gh/ZhiweiYan-96/48/base 2025-03-17T17:41:35.8937880Z * [new branch] 
gh/ZhiweiYan-96/48/head -> origin/gh/ZhiweiYan-96/48/head 2025-03-17T17:41:35.8938849Z * [new branch] gh/ZhiweiYan-96/48/orig -> origin/gh/ZhiweiYan-96/48/orig 2025-03-17T17:41:35.8940716Z * [new branch] gh/ZhiweiYan-96/49/base -> origin/gh/ZhiweiYan-96/49/base 2025-03-17T17:41:35.8941764Z * [new branch] gh/ZhiweiYan-96/49/head -> origin/gh/ZhiweiYan-96/49/head 2025-03-17T17:41:35.8942991Z * [new branch] gh/ZhiweiYan-96/50/base -> origin/gh/ZhiweiYan-96/50/base 2025-03-17T17:41:35.8943990Z * [new branch] gh/ZhiweiYan-96/50/head -> origin/gh/ZhiweiYan-96/50/head 2025-03-17T17:41:35.8945101Z * [new branch] gh/ZhiweiYan-96/50/orig -> origin/gh/ZhiweiYan-96/50/orig 2025-03-17T17:41:35.8946373Z * [new branch] gh/ZhiweiYan-96/51/base -> origin/gh/ZhiweiYan-96/51/base 2025-03-17T17:41:35.8948197Z * [new branch] gh/ZhiweiYan-96/51/head -> origin/gh/ZhiweiYan-96/51/head 2025-03-17T17:41:35.8949221Z * [new branch] gh/ZhiweiYan-96/51/orig -> origin/gh/ZhiweiYan-96/51/orig 2025-03-17T17:41:35.8950390Z * [new branch] gh/ZhiweiYan-96/52/base -> origin/gh/ZhiweiYan-96/52/base 2025-03-17T17:41:35.8951547Z * [new branch] gh/ZhiweiYan-96/52/head -> origin/gh/ZhiweiYan-96/52/head 2025-03-17T17:41:35.8952582Z * [new branch] gh/ZhiweiYan-96/52/orig -> origin/gh/ZhiweiYan-96/52/orig 2025-03-17T17:41:35.8953650Z * [new branch] gh/ZhiweiYan-96/53/base -> origin/gh/ZhiweiYan-96/53/base 2025-03-17T17:41:35.8954646Z * [new branch] gh/ZhiweiYan-96/53/head -> origin/gh/ZhiweiYan-96/53/head 2025-03-17T17:41:35.8955633Z * [new branch] gh/ZhiweiYan-96/53/orig -> origin/gh/ZhiweiYan-96/53/orig 2025-03-17T17:41:35.8956884Z * [new branch] gh/ZhiweiYan-96/54/base -> origin/gh/ZhiweiYan-96/54/base 2025-03-17T17:41:35.8957864Z * [new branch] gh/ZhiweiYan-96/54/head -> origin/gh/ZhiweiYan-96/54/head 2025-03-17T17:41:35.8958958Z * [new branch] gh/ZhiweiYan-96/54/orig -> origin/gh/ZhiweiYan-96/54/orig 2025-03-17T17:41:35.8960428Z * [new branch] gh/aakhundov/1/base -> origin/gh/aakhundov/1/base 2025-03-17T17:41:35.8961447Z * [new branch] gh/aakhundov/1/head -> origin/gh/aakhundov/1/head 2025-03-17T17:41:35.8962598Z * [new branch] gh/aakhundov/2/base -> origin/gh/aakhundov/2/base 2025-03-17T17:41:35.8963633Z * [new branch] gh/aakhundov/2/head -> origin/gh/aakhundov/2/head 2025-03-17T17:41:35.8964910Z * [new branch] gh/aditew01/openblas -> origin/gh/aditew01/openblas 2025-03-17T17:41:35.8965866Z * [new branch] gh/aditew01/sbgemm -> origin/gh/aditew01/sbgemm 2025-03-17T17:41:35.8966927Z * [new branch] gh/aditew01/vecbf16 -> origin/gh/aditew01/vecbf16 2025-03-17T17:41:35.8968268Z * [new branch] gh/albanD/3/base -> origin/gh/albanD/3/base 2025-03-17T17:41:35.8969246Z * [new branch] gh/albanD/3/head -> origin/gh/albanD/3/head 2025-03-17T17:41:35.8970392Z * [new branch] gh/albanD/3/orig -> origin/gh/albanD/3/orig 2025-03-17T17:41:35.8971849Z * [new branch] gh/alexbrauckmann/paddedtensor_init -> origin/gh/alexbrauckmann/paddedtensor_init 2025-03-17T17:41:35.8973176Z * [new branch] gh/alexsamardzic/25/base -> origin/gh/alexsamardzic/25/base 2025-03-17T17:41:35.8974318Z * [new branch] gh/alexsamardzic/25/head -> origin/gh/alexsamardzic/25/head 2025-03-17T17:41:35.8975399Z * [new branch] gh/alexsamardzic/25/orig -> origin/gh/alexsamardzic/25/orig 2025-03-17T17:41:35.8976476Z * [new branch] gh/alexsamardzic/26/base -> origin/gh/alexsamardzic/26/base 2025-03-17T17:41:35.8977565Z * [new branch] gh/alexsamardzic/26/head -> origin/gh/alexsamardzic/26/head 2025-03-17T17:41:35.8978671Z * [new branch] gh/alexsamardzic/26/orig -> 
origin/gh/alexsamardzic/26/orig 2025-03-17T17:41:35.8980021Z * [new branch] gh/amjames/18/base -> origin/gh/amjames/18/base 2025-03-17T17:41:35.8980964Z * [new branch] gh/amjames/18/head -> origin/gh/amjames/18/head 2025-03-17T17:41:35.8981886Z * [new branch] gh/amjames/18/orig -> origin/gh/amjames/18/orig 2025-03-17T17:41:35.8983179Z * [new branch] gh/amjames/19/base -> origin/gh/amjames/19/base 2025-03-17T17:41:35.8984300Z * [new branch] gh/amjames/19/head -> origin/gh/amjames/19/head 2025-03-17T17:41:35.8985376Z * [new branch] gh/amjames/19/orig -> origin/gh/amjames/19/orig 2025-03-17T17:41:35.8987183Z * [new branch] gh/amjames/20/base -> origin/gh/amjames/20/base 2025-03-17T17:41:35.8988244Z * [new branch] gh/amjames/20/head -> origin/gh/amjames/20/head 2025-03-17T17:41:35.8989515Z * [new branch] gh/amjames/20/orig -> origin/gh/amjames/20/orig 2025-03-17T17:41:35.8990947Z * [new branch] gh/amjames/21/base -> origin/gh/amjames/21/base 2025-03-17T17:41:35.8992066Z * [new branch] gh/amjames/21/head -> origin/gh/amjames/21/head 2025-03-17T17:41:35.8993020Z * [new branch] gh/amjames/21/orig -> origin/gh/amjames/21/orig 2025-03-17T17:41:35.8994795Z * [new branch] gh/andrewlee302/1/base -> origin/gh/andrewlee302/1/base 2025-03-17T17:41:35.8995832Z * [new branch] gh/andrewlee302/1/head -> origin/gh/andrewlee302/1/head 2025-03-17T17:41:35.8997211Z * [new branch] gh/andrewlee302/3/base -> origin/gh/andrewlee302/3/base 2025-03-17T17:41:35.8998155Z * [new branch] gh/andrewlee302/3/head -> origin/gh/andrewlee302/3/head 2025-03-17T17:41:35.8999301Z * [new branch] gh/andrewlee302/3/orig -> origin/gh/andrewlee302/3/orig 2025-03-17T17:41:35.9001072Z * [new branch] gh/andrewor14/35/base -> origin/gh/andrewor14/35/base 2025-03-17T17:41:35.9002066Z * [new branch] gh/andrewor14/35/head -> origin/gh/andrewor14/35/head 2025-03-17T17:41:35.9003005Z * [new branch] gh/andrewor14/35/orig -> origin/gh/andrewor14/35/orig 2025-03-17T17:41:35.9004465Z * [new branch] gh/andrewor14/36/base -> origin/gh/andrewor14/36/base 2025-03-17T17:41:35.9005580Z * [new branch] gh/andrewor14/36/head -> origin/gh/andrewor14/36/head 2025-03-17T17:41:35.9006607Z * [new branch] gh/andrewor14/36/orig -> origin/gh/andrewor14/36/orig 2025-03-17T17:41:35.9007919Z * [new branch] gh/andrewor14/37/base -> origin/gh/andrewor14/37/base 2025-03-17T17:41:35.9008978Z * [new branch] gh/andrewor14/37/head -> origin/gh/andrewor14/37/head 2025-03-17T17:41:35.9009924Z * [new branch] gh/andrewor14/37/orig -> origin/gh/andrewor14/37/orig 2025-03-17T17:41:35.9011349Z * [new branch] gh/andrewor14/50/base -> origin/gh/andrewor14/50/base 2025-03-17T17:41:35.9012753Z * [new branch] gh/andrewor14/50/head -> origin/gh/andrewor14/50/head 2025-03-17T17:41:35.9013758Z * [new branch] gh/andrewor14/50/orig -> origin/gh/andrewor14/50/orig 2025-03-17T17:41:35.9015065Z * [new branch] gh/angelayi/64/base -> origin/gh/angelayi/64/base 2025-03-17T17:41:35.9016086Z * [new branch] gh/angelayi/64/head -> origin/gh/angelayi/64/head 2025-03-17T17:41:35.9017108Z * [new branch] gh/angelayi/64/orig -> origin/gh/angelayi/64/orig 2025-03-17T17:41:35.9019337Z * [new branch] gh/angelayi/65/base -> origin/gh/angelayi/65/base 2025-03-17T17:41:35.9020393Z * [new branch] gh/angelayi/65/head -> origin/gh/angelayi/65/head 2025-03-17T17:41:35.9021348Z * [new branch] gh/angelayi/65/orig -> origin/gh/angelayi/65/orig 2025-03-17T17:41:35.9022644Z * [new branch] gh/angelayi/66/base -> origin/gh/angelayi/66/base 2025-03-17T17:41:35.9023656Z * [new branch] gh/angelayi/66/head -> 
origin/gh/angelayi/66/head 2025-03-17T17:41:35.9024600Z * [new branch] gh/angelayi/66/orig -> origin/gh/angelayi/66/orig 2025-03-17T17:41:35.9026262Z * [new branch] gh/angelayi/67/base -> origin/gh/angelayi/67/base 2025-03-17T17:41:35.9027400Z * [new branch] gh/angelayi/67/head -> origin/gh/angelayi/67/head 2025-03-17T17:41:35.9028289Z * [new branch] gh/angelayi/67/orig -> origin/gh/angelayi/67/orig 2025-03-17T17:41:35.9029790Z * [new branch] gh/angelayi/68/base -> origin/gh/angelayi/68/base 2025-03-17T17:41:35.9030731Z * [new branch] gh/angelayi/68/head -> origin/gh/angelayi/68/head 2025-03-17T17:41:35.9031672Z * [new branch] gh/angelayi/68/orig -> origin/gh/angelayi/68/orig 2025-03-17T17:41:35.9033129Z * [new branch] gh/angelayi/69/base -> origin/gh/angelayi/69/base 2025-03-17T17:41:35.9034223Z * [new branch] gh/angelayi/69/head -> origin/gh/angelayi/69/head 2025-03-17T17:41:35.9035225Z * [new branch] gh/angelayi/69/orig -> origin/gh/angelayi/69/orig 2025-03-17T17:41:35.9036517Z * [new branch] gh/angelayi/70/base -> origin/gh/angelayi/70/base 2025-03-17T17:41:35.9037615Z * [new branch] gh/angelayi/70/head -> origin/gh/angelayi/70/head 2025-03-17T17:41:35.9038597Z * [new branch] gh/angelayi/70/orig -> origin/gh/angelayi/70/orig 2025-03-17T17:41:35.9040237Z * [new branch] gh/angelayi/71/base -> origin/gh/angelayi/71/base 2025-03-17T17:41:35.9041192Z * [new branch] gh/angelayi/71/head -> origin/gh/angelayi/71/head 2025-03-17T17:41:35.9042139Z * [new branch] gh/angelayi/71/orig -> origin/gh/angelayi/71/orig 2025-03-17T17:41:35.9043476Z * [new branch] gh/angelayi/72/base -> origin/gh/angelayi/72/base 2025-03-17T17:41:35.9044526Z * [new branch] gh/angelayi/72/head -> origin/gh/angelayi/72/head 2025-03-17T17:41:35.9045475Z * [new branch] gh/angelayi/72/orig -> origin/gh/angelayi/72/orig 2025-03-17T17:41:35.9046634Z * [new branch] gh/angelayi/73/base -> origin/gh/angelayi/73/base 2025-03-17T17:41:35.9047573Z * [new branch] gh/angelayi/73/head -> origin/gh/angelayi/73/head 2025-03-17T17:41:35.9048593Z * [new branch] gh/angelayi/73/orig -> origin/gh/angelayi/73/orig 2025-03-17T17:41:35.9049931Z * [new branch] gh/angelayi/74/base -> origin/gh/angelayi/74/base 2025-03-17T17:41:35.9050874Z * [new branch] gh/angelayi/74/head -> origin/gh/angelayi/74/head 2025-03-17T17:41:35.9051879Z * [new branch] gh/angelayi/74/orig -> origin/gh/angelayi/74/orig 2025-03-17T17:41:35.9053429Z * [new branch] gh/angelayi/75/base -> origin/gh/angelayi/75/base 2025-03-17T17:41:35.9054378Z * [new branch] gh/angelayi/75/head -> origin/gh/angelayi/75/head 2025-03-17T17:41:35.9055273Z * [new branch] gh/angelayi/75/orig -> origin/gh/angelayi/75/orig 2025-03-17T17:41:35.9056575Z * [new branch] gh/angelayi/76/base -> origin/gh/angelayi/76/base 2025-03-17T17:41:35.9057524Z * [new branch] gh/angelayi/76/head -> origin/gh/angelayi/76/head 2025-03-17T17:41:35.9058868Z * [new branch] gh/angelayi/76/orig -> origin/gh/angelayi/76/orig 2025-03-17T17:41:35.9059802Z * [new branch] gh/angelayi/77/base -> origin/gh/angelayi/77/base 2025-03-17T17:41:35.9060779Z * [new branch] gh/angelayi/77/head -> origin/gh/angelayi/77/head 2025-03-17T17:41:35.9061855Z * [new branch] gh/angelayi/77/orig -> origin/gh/angelayi/77/orig 2025-03-17T17:41:35.9063220Z * [new branch] gh/angelayi/78/base -> origin/gh/angelayi/78/base 2025-03-17T17:41:35.9064184Z * [new branch] gh/angelayi/78/head -> origin/gh/angelayi/78/head 2025-03-17T17:41:35.9065097Z * [new branch] gh/angelayi/78/orig -> origin/gh/angelayi/78/orig 2025-03-17T17:41:35.9083888Z * [new branch] 
gh/angelayi/79/base -> origin/gh/angelayi/79/base 2025-03-17T17:41:35.9084907Z * [new branch] gh/angelayi/79/head -> origin/gh/angelayi/79/head 2025-03-17T17:41:35.9085856Z * [new branch] gh/angelayi/79/orig -> origin/gh/angelayi/79/orig 2025-03-17T17:41:35.9086703Z * [new branch] gh/angelayi/80/base -> origin/gh/angelayi/80/base 2025-03-17T17:41:35.9087562Z * [new branch] gh/angelayi/80/head -> origin/gh/angelayi/80/head 2025-03-17T17:41:35.9088127Z * [new branch] gh/angelayi/80/orig -> origin/gh/angelayi/80/orig 2025-03-17T17:41:35.9089242Z * [new branch] gh/angelayi/81/base -> origin/gh/angelayi/81/base 2025-03-17T17:41:35.9090215Z * [new branch] gh/angelayi/81/head -> origin/gh/angelayi/81/head 2025-03-17T17:41:35.9091060Z * [new branch] gh/angelayi/81/orig -> origin/gh/angelayi/81/orig 2025-03-17T17:41:35.9091785Z * [new branch] gh/anijain2305/162/base -> origin/gh/anijain2305/162/base 2025-03-17T17:41:35.9092774Z * [new branch] gh/anijain2305/162/head -> origin/gh/anijain2305/162/head 2025-03-17T17:41:35.9093676Z * [new branch] gh/anijain2305/541/head -> origin/gh/anijain2305/541/head 2025-03-17T17:41:35.9094578Z * [new branch] gh/anijain2305/566/base -> origin/gh/anijain2305/566/base 2025-03-17T17:41:35.9095403Z * [new branch] gh/anijain2305/566/head -> origin/gh/anijain2305/566/head 2025-03-17T17:41:35.9096269Z * [new branch] gh/anijain2305/566/orig -> origin/gh/anijain2305/566/orig 2025-03-17T17:41:35.9097193Z * [new branch] gh/anijain2305/571/base -> origin/gh/anijain2305/571/base 2025-03-17T17:41:35.9098084Z * [new branch] gh/anijain2305/571/head -> origin/gh/anijain2305/571/head 2025-03-17T17:41:35.9099041Z * [new branch] gh/anijain2305/571/orig -> origin/gh/anijain2305/571/orig 2025-03-17T17:41:35.9099936Z * [new branch] gh/anijain2305/580/base -> origin/gh/anijain2305/580/base 2025-03-17T17:41:35.9100869Z * [new branch] gh/anijain2305/580/head -> origin/gh/anijain2305/580/head 2025-03-17T17:41:35.9101762Z * [new branch] gh/anijain2305/580/orig -> origin/gh/anijain2305/580/orig 2025-03-17T17:41:35.9102660Z * [new branch] gh/anijain2305/620/base -> origin/gh/anijain2305/620/base 2025-03-17T17:41:35.9103526Z * [new branch] gh/anijain2305/620/head -> origin/gh/anijain2305/620/head 2025-03-17T17:41:35.9104587Z * [new branch] gh/anijain2305/620/orig -> origin/gh/anijain2305/620/orig 2025-03-17T17:41:35.9105531Z * [new branch] gh/anijain2305/634/base -> origin/gh/anijain2305/634/base 2025-03-17T17:41:35.9106421Z * [new branch] gh/anijain2305/634/head -> origin/gh/anijain2305/634/head 2025-03-17T17:41:35.9107302Z * [new branch] gh/anijain2305/634/orig -> origin/gh/anijain2305/634/orig 2025-03-17T17:41:35.9108220Z * [new branch] gh/anijain2305/668/base -> origin/gh/anijain2305/668/base 2025-03-17T17:41:35.9109154Z * [new branch] gh/anijain2305/668/head -> origin/gh/anijain2305/668/head 2025-03-17T17:41:35.9110063Z * [new branch] gh/anijain2305/668/orig -> origin/gh/anijain2305/668/orig 2025-03-17T17:41:35.9110849Z * [new branch] gh/anijain2305/669/base -> origin/gh/anijain2305/669/base 2025-03-17T17:41:35.9111718Z * [new branch] gh/anijain2305/669/head -> origin/gh/anijain2305/669/head 2025-03-17T17:41:35.9112689Z * [new branch] gh/anijain2305/669/orig -> origin/gh/anijain2305/669/orig 2025-03-17T17:41:35.9113691Z * [new branch] gh/anijain2305/675/base -> origin/gh/anijain2305/675/base 2025-03-17T17:41:35.9114444Z * [new branch] gh/anijain2305/675/head -> origin/gh/anijain2305/675/head 2025-03-17T17:41:35.9115361Z * [new branch] gh/anijain2305/675/orig -> 
origin/gh/anijain2305/675/orig 2025-03-17T17:41:35.9116332Z * [new branch] gh/anijain2305/677/base -> origin/gh/anijain2305/677/base 2025-03-17T17:41:35.9117314Z * [new branch] gh/anijain2305/677/head -> origin/gh/anijain2305/677/head 2025-03-17T17:41:35.9118196Z * [new branch] gh/anijain2305/677/orig -> origin/gh/anijain2305/677/orig 2025-03-17T17:41:35.9119196Z * [new branch] gh/anijain2305/679/base -> origin/gh/anijain2305/679/base 2025-03-17T17:41:35.9120434Z * [new branch] gh/anijain2305/679/head -> origin/gh/anijain2305/679/head 2025-03-17T17:41:35.9121164Z * [new branch] gh/anijain2305/679/orig -> origin/gh/anijain2305/679/orig 2025-03-17T17:41:35.9122098Z * [new branch] gh/anijain2305/680/base -> origin/gh/anijain2305/680/base 2025-03-17T17:41:35.9123102Z * [new branch] gh/anijain2305/680/head -> origin/gh/anijain2305/680/head 2025-03-17T17:41:35.9124104Z * [new branch] gh/anijain2305/680/orig -> origin/gh/anijain2305/680/orig 2025-03-17T17:41:35.9124722Z * [new branch] gh/anijain2305/681/base -> origin/gh/anijain2305/681/base 2025-03-17T17:41:35.9125724Z * [new branch] gh/anijain2305/681/head -> origin/gh/anijain2305/681/head 2025-03-17T17:41:35.9126631Z * [new branch] gh/anijain2305/681/orig -> origin/gh/anijain2305/681/orig 2025-03-17T17:41:35.9127540Z * [new branch] gh/anijain2305/682/base -> origin/gh/anijain2305/682/base 2025-03-17T17:41:35.9128359Z * [new branch] gh/anijain2305/682/head -> origin/gh/anijain2305/682/head 2025-03-17T17:41:35.9129364Z * [new branch] gh/anijain2305/682/orig -> origin/gh/anijain2305/682/orig 2025-03-17T17:41:35.9130368Z * [new branch] gh/anijain2305/683/base -> origin/gh/anijain2305/683/base 2025-03-17T17:41:35.9131349Z * [new branch] gh/anijain2305/683/head -> origin/gh/anijain2305/683/head 2025-03-17T17:41:35.9131984Z * [new branch] gh/anijain2305/683/orig -> origin/gh/anijain2305/683/orig 2025-03-17T17:41:35.9132997Z * [new branch] gh/anijain2305/684/base -> origin/gh/anijain2305/684/base 2025-03-17T17:41:35.9134041Z * [new branch] gh/anijain2305/684/head -> origin/gh/anijain2305/684/head 2025-03-17T17:41:35.9135018Z * [new branch] gh/anijain2305/684/orig -> origin/gh/anijain2305/684/orig 2025-03-17T17:41:35.9136727Z * [new branch] gh/anijain2305/685/base -> origin/gh/anijain2305/685/base 2025-03-17T17:41:35.9137903Z * [new branch] gh/anijain2305/685/head -> origin/gh/anijain2305/685/head 2025-03-17T17:41:35.9138886Z * [new branch] gh/anijain2305/685/orig -> origin/gh/anijain2305/685/orig 2025-03-17T17:41:35.9140299Z * [new branch] gh/anijain2305/686/base -> origin/gh/anijain2305/686/base 2025-03-17T17:41:35.9141291Z * [new branch] gh/anijain2305/686/head -> origin/gh/anijain2305/686/head 2025-03-17T17:41:35.9142278Z * [new branch] gh/anijain2305/686/orig -> origin/gh/anijain2305/686/orig 2025-03-17T17:41:35.9143498Z * [new branch] gh/anijain2305/687/base -> origin/gh/anijain2305/687/base 2025-03-17T17:41:35.9144472Z * [new branch] gh/anijain2305/687/head -> origin/gh/anijain2305/687/head 2025-03-17T17:41:35.9145493Z * [new branch] gh/anijain2305/687/orig -> origin/gh/anijain2305/687/orig 2025-03-17T17:41:35.9146953Z * [new branch] gh/anijain2305/688/base -> origin/gh/anijain2305/688/base 2025-03-17T17:41:35.9148054Z * [new branch] gh/anijain2305/688/head -> origin/gh/anijain2305/688/head 2025-03-17T17:41:35.9149403Z * [new branch] gh/anijain2305/688/orig -> origin/gh/anijain2305/688/orig 2025-03-17T17:41:35.9150975Z * [new branch] gh/anijain2305/689/base -> origin/gh/anijain2305/689/base 2025-03-17T17:41:35.9152248Z * [new branch] 
gh/anijain2305/689/head -> origin/gh/anijain2305/689/head [... several hundred further "* [new branch] gh/<user>/<stack>/{base,head,orig} -> origin/gh/<user>/<stack>/{base,head,orig}" git fetch entries for ghstack branches (anijain2305, anjali411, aorenste, avikchaudhuri, bdhirsh, benjaminglass1, bertmaher, bobrenjc93, briancoutinho, c00w, chenyang78, chillee, chunyuan-w, clee2000, davidberard98, desertfire, drisspg, eellison, etaf, ezyang, fadara01, fduwjj, fegin, fffrog, guangyey) elided ...] 2025-03-17T17:41:36.0080406Z * [new branch]
gh/guangyey/89/orig -> origin/gh/guangyey/89/orig 2025-03-17T17:41:36.0082179Z * [new branch] gh/guilhermeleobas/100/base -> origin/gh/guilhermeleobas/100/base 2025-03-17T17:41:36.0083101Z * [new branch] gh/guilhermeleobas/100/head -> origin/gh/guilhermeleobas/100/head 2025-03-17T17:41:36.0084076Z * [new branch] gh/guilhermeleobas/100/orig -> origin/gh/guilhermeleobas/100/orig 2025-03-17T17:41:36.0085503Z * [new branch] gh/guilhermeleobas/101/base -> origin/gh/guilhermeleobas/101/base 2025-03-17T17:41:36.0086409Z * [new branch] gh/guilhermeleobas/101/head -> origin/gh/guilhermeleobas/101/head 2025-03-17T17:41:36.0087433Z * [new branch] gh/guilhermeleobas/101/orig -> origin/gh/guilhermeleobas/101/orig 2025-03-17T17:41:36.0088792Z * [new branch] gh/guilhermeleobas/102/base -> origin/gh/guilhermeleobas/102/base 2025-03-17T17:41:36.0089710Z * [new branch] gh/guilhermeleobas/102/head -> origin/gh/guilhermeleobas/102/head 2025-03-17T17:41:36.0090678Z * [new branch] gh/guilhermeleobas/102/orig -> origin/gh/guilhermeleobas/102/orig 2025-03-17T17:41:36.0091958Z * [new branch] gh/guilhermeleobas/103/base -> origin/gh/guilhermeleobas/103/base 2025-03-17T17:41:36.0092918Z * [new branch] gh/guilhermeleobas/103/head -> origin/gh/guilhermeleobas/103/head 2025-03-17T17:41:36.0093906Z * [new branch] gh/guilhermeleobas/103/orig -> origin/gh/guilhermeleobas/103/orig 2025-03-17T17:41:36.0095532Z * [new branch] gh/guilhermeleobas/104/base -> origin/gh/guilhermeleobas/104/base 2025-03-17T17:41:36.0096462Z * [new branch] gh/guilhermeleobas/104/head -> origin/gh/guilhermeleobas/104/head 2025-03-17T17:41:36.0097447Z * [new branch] gh/guilhermeleobas/104/orig -> origin/gh/guilhermeleobas/104/orig 2025-03-17T17:41:36.0098971Z * [new branch] gh/guilhermeleobas/105/base -> origin/gh/guilhermeleobas/105/base 2025-03-17T17:41:36.0099801Z * [new branch] gh/guilhermeleobas/105/head -> origin/gh/guilhermeleobas/105/head 2025-03-17T17:41:36.0100804Z * [new branch] gh/guilhermeleobas/105/orig -> origin/gh/guilhermeleobas/105/orig 2025-03-17T17:41:36.0102428Z * [new branch] gh/guilhermeleobas/106/base -> origin/gh/guilhermeleobas/106/base 2025-03-17T17:41:36.0103357Z * [new branch] gh/guilhermeleobas/106/head -> origin/gh/guilhermeleobas/106/head 2025-03-17T17:41:36.0104448Z * [new branch] gh/guilhermeleobas/106/orig -> origin/gh/guilhermeleobas/106/orig 2025-03-17T17:41:36.0105946Z * [new branch] gh/guilhermeleobas/107/base -> origin/gh/guilhermeleobas/107/base 2025-03-17T17:41:36.0107297Z * [new branch] gh/guilhermeleobas/107/head -> origin/gh/guilhermeleobas/107/head 2025-03-17T17:41:36.0108266Z * [new branch] gh/guilhermeleobas/107/orig -> origin/gh/guilhermeleobas/107/orig 2025-03-17T17:41:36.0109622Z * [new branch] gh/guilhermeleobas/108/base -> origin/gh/guilhermeleobas/108/base 2025-03-17T17:41:36.0110586Z * [new branch] gh/guilhermeleobas/108/head -> origin/gh/guilhermeleobas/108/head 2025-03-17T17:41:36.0111546Z * [new branch] gh/guilhermeleobas/108/orig -> origin/gh/guilhermeleobas/108/orig 2025-03-17T17:41:36.0112902Z * [new branch] gh/guilhermeleobas/109/base -> origin/gh/guilhermeleobas/109/base 2025-03-17T17:41:36.0113817Z * [new branch] gh/guilhermeleobas/109/head -> origin/gh/guilhermeleobas/109/head 2025-03-17T17:41:36.0114814Z * [new branch] gh/guilhermeleobas/109/orig -> origin/gh/guilhermeleobas/109/orig 2025-03-17T17:41:36.0116194Z * [new branch] gh/guilhermeleobas/11/base -> origin/gh/guilhermeleobas/11/base 2025-03-17T17:41:36.0117103Z * [new branch] gh/guilhermeleobas/11/head -> 
origin/gh/guilhermeleobas/11/head 2025-03-17T17:41:36.0118117Z * [new branch] gh/guilhermeleobas/11/orig -> origin/gh/guilhermeleobas/11/orig 2025-03-17T17:41:36.0119478Z * [new branch] gh/guilhermeleobas/110/base -> origin/gh/guilhermeleobas/110/base 2025-03-17T17:41:36.0120444Z * [new branch] gh/guilhermeleobas/110/head -> origin/gh/guilhermeleobas/110/head 2025-03-17T17:41:36.0121427Z * [new branch] gh/guilhermeleobas/110/orig -> origin/gh/guilhermeleobas/110/orig 2025-03-17T17:41:36.0124252Z * [new branch] gh/guilhermeleobas/111/base -> origin/gh/guilhermeleobas/111/base 2025-03-17T17:41:36.0125073Z * [new branch] gh/guilhermeleobas/111/head -> origin/gh/guilhermeleobas/111/head 2025-03-17T17:41:36.0126020Z * [new branch] gh/guilhermeleobas/111/orig -> origin/gh/guilhermeleobas/111/orig 2025-03-17T17:41:36.0127384Z * [new branch] gh/guilhermeleobas/73/base -> origin/gh/guilhermeleobas/73/base 2025-03-17T17:41:36.0128325Z * [new branch] gh/guilhermeleobas/73/head -> origin/gh/guilhermeleobas/73/head 2025-03-17T17:41:36.0129278Z * [new branch] gh/guilhermeleobas/73/orig -> origin/gh/guilhermeleobas/73/orig 2025-03-17T17:41:36.0130665Z * [new branch] gh/guilhermeleobas/92/base -> origin/gh/guilhermeleobas/92/base 2025-03-17T17:41:36.0131607Z * [new branch] gh/guilhermeleobas/92/head -> origin/gh/guilhermeleobas/92/head 2025-03-17T17:41:36.0132560Z * [new branch] gh/guilhermeleobas/92/orig -> origin/gh/guilhermeleobas/92/orig 2025-03-17T17:41:36.0134013Z * [new branch] gh/guilhermeleobas/93/base -> origin/gh/guilhermeleobas/93/base 2025-03-17T17:41:36.0134958Z * [new branch] gh/guilhermeleobas/93/head -> origin/gh/guilhermeleobas/93/head 2025-03-17T17:41:36.0135903Z * [new branch] gh/guilhermeleobas/93/orig -> origin/gh/guilhermeleobas/93/orig 2025-03-17T17:41:36.0137567Z * [new branch] gh/guilhermeleobas/94/base -> origin/gh/guilhermeleobas/94/base 2025-03-17T17:41:36.0138419Z * [new branch] gh/guilhermeleobas/94/head -> origin/gh/guilhermeleobas/94/head 2025-03-17T17:41:36.0139395Z * [new branch] gh/guilhermeleobas/94/orig -> origin/gh/guilhermeleobas/94/orig 2025-03-17T17:41:36.0141201Z * [new branch] gh/guilhermeleobas/95/base -> origin/gh/guilhermeleobas/95/base 2025-03-17T17:41:36.0142137Z * [new branch] gh/guilhermeleobas/95/head -> origin/gh/guilhermeleobas/95/head 2025-03-17T17:41:36.0143125Z * [new branch] gh/guilhermeleobas/95/orig -> origin/gh/guilhermeleobas/95/orig 2025-03-17T17:41:36.0144573Z * [new branch] gh/guilhermeleobas/97/base -> origin/gh/guilhermeleobas/97/base 2025-03-17T17:41:36.0145545Z * [new branch] gh/guilhermeleobas/97/head -> origin/gh/guilhermeleobas/97/head 2025-03-17T17:41:36.0146550Z * [new branch] gh/guilhermeleobas/97/orig -> origin/gh/guilhermeleobas/97/orig 2025-03-17T17:41:36.0147989Z * [new branch] gh/guilhermeleobas/98/base -> origin/gh/guilhermeleobas/98/base 2025-03-17T17:41:36.0148831Z * [new branch] gh/guilhermeleobas/98/head -> origin/gh/guilhermeleobas/98/head 2025-03-17T17:41:36.0149815Z * [new branch] gh/guilhermeleobas/98/orig -> origin/gh/guilhermeleobas/98/orig 2025-03-17T17:41:36.0151206Z * [new branch] gh/guilhermeleobas/99/base -> origin/gh/guilhermeleobas/99/base 2025-03-17T17:41:36.0152096Z * [new branch] gh/guilhermeleobas/99/head -> origin/gh/guilhermeleobas/99/head 2025-03-17T17:41:36.0153043Z * [new branch] gh/guilhermeleobas/99/orig -> origin/gh/guilhermeleobas/99/orig 2025-03-17T17:41:36.0154821Z * [new branch] gh/henrylhtsang/10/base -> origin/gh/henrylhtsang/10/base 2025-03-17T17:41:36.0155776Z * [new branch] 
gh/henrylhtsang/10/head -> origin/gh/henrylhtsang/10/head 2025-03-17T17:41:36.0156741Z * [new branch] gh/henrylhtsang/10/orig -> origin/gh/henrylhtsang/10/orig 2025-03-17T17:41:36.0158173Z * [new branch] gh/henrylhtsang/11/base -> origin/gh/henrylhtsang/11/base 2025-03-17T17:41:36.0159087Z * [new branch] gh/henrylhtsang/11/head -> origin/gh/henrylhtsang/11/head 2025-03-17T17:41:36.0160067Z * [new branch] gh/henrylhtsang/11/orig -> origin/gh/henrylhtsang/11/orig 2025-03-17T17:41:36.0161559Z * [new branch] gh/henrylhtsang/12/base -> origin/gh/henrylhtsang/12/base 2025-03-17T17:41:36.0162491Z * [new branch] gh/henrylhtsang/12/head -> origin/gh/henrylhtsang/12/head 2025-03-17T17:41:36.0163444Z * [new branch] gh/henrylhtsang/12/orig -> origin/gh/henrylhtsang/12/orig 2025-03-17T17:41:36.0164986Z * [new branch] gh/henrylhtsang/13/base -> origin/gh/henrylhtsang/13/base 2025-03-17T17:41:36.0165947Z * [new branch] gh/henrylhtsang/13/head -> origin/gh/henrylhtsang/13/head 2025-03-17T17:41:36.0166954Z * [new branch] gh/henrylhtsang/13/orig -> origin/gh/henrylhtsang/13/orig 2025-03-17T17:41:36.0168509Z * [new branch] gh/henrylhtsang/14/base -> origin/gh/henrylhtsang/14/base 2025-03-17T17:41:36.0169294Z * [new branch] gh/henrylhtsang/14/head -> origin/gh/henrylhtsang/14/head 2025-03-17T17:41:36.0170407Z * [new branch] gh/henrylhtsang/14/orig -> origin/gh/henrylhtsang/14/orig 2025-03-17T17:41:36.0171719Z * [new branch] gh/henrylhtsang/15/base -> origin/gh/henrylhtsang/15/base 2025-03-17T17:41:36.0172570Z * [new branch] gh/henrylhtsang/15/head -> origin/gh/henrylhtsang/15/head 2025-03-17T17:41:36.0173544Z * [new branch] gh/henrylhtsang/15/orig -> origin/gh/henrylhtsang/15/orig 2025-03-17T17:41:36.0174963Z * [new branch] gh/henrylhtsang/16/base -> origin/gh/henrylhtsang/16/base 2025-03-17T17:41:36.0176138Z * [new branch] gh/henrylhtsang/16/head -> origin/gh/henrylhtsang/16/head 2025-03-17T17:41:36.0177553Z * [new branch] gh/henrylhtsang/16/orig -> origin/gh/henrylhtsang/16/orig 2025-03-17T17:41:36.0180245Z * [new branch] gh/henrylhtsang/17/base -> origin/gh/henrylhtsang/17/base 2025-03-17T17:41:36.0181468Z * [new branch] gh/henrylhtsang/17/head -> origin/gh/henrylhtsang/17/head 2025-03-17T17:41:36.0184071Z * [new branch] gh/henrylhtsang/17/orig -> origin/gh/henrylhtsang/17/orig 2025-03-17T17:41:36.0186297Z * [new branch] gh/henrylhtsang/18/base -> origin/gh/henrylhtsang/18/base 2025-03-17T17:41:36.0187113Z * [new branch] gh/henrylhtsang/18/head -> origin/gh/henrylhtsang/18/head 2025-03-17T17:41:36.0187936Z * [new branch] gh/henrylhtsang/18/orig -> origin/gh/henrylhtsang/18/orig 2025-03-17T17:41:36.0190026Z * [new branch] gh/henrylhtsang/19/base -> origin/gh/henrylhtsang/19/base 2025-03-17T17:41:36.0191675Z * [new branch] gh/henrylhtsang/19/head -> origin/gh/henrylhtsang/19/head 2025-03-17T17:41:36.0193225Z * [new branch] gh/henrylhtsang/19/orig -> origin/gh/henrylhtsang/19/orig 2025-03-17T17:41:36.0195302Z * [new branch] gh/henrylhtsang/20/base -> origin/gh/henrylhtsang/20/base 2025-03-17T17:41:36.0197109Z * [new branch] gh/henrylhtsang/20/head -> origin/gh/henrylhtsang/20/head 2025-03-17T17:41:36.0198078Z * [new branch] gh/henrylhtsang/20/orig -> origin/gh/henrylhtsang/20/orig 2025-03-17T17:41:36.0199227Z * [new branch] gh/henrylhtsang/21/base -> origin/gh/henrylhtsang/21/base 2025-03-17T17:41:36.0200289Z * [new branch] gh/henrylhtsang/21/head -> origin/gh/henrylhtsang/21/head 2025-03-17T17:41:36.0201449Z * [new branch] gh/henrylhtsang/21/orig -> origin/gh/henrylhtsang/21/orig 2025-03-17T17:41:36.0202486Z 
* [new branch] gh/henrylhtsang/22/base -> origin/gh/henrylhtsang/22/base 2025-03-17T17:41:36.0203492Z * [new branch] gh/henrylhtsang/22/head -> origin/gh/henrylhtsang/22/head 2025-03-17T17:41:36.0206625Z * [new branch] gh/henrylhtsang/22/orig -> origin/gh/henrylhtsang/22/orig 2025-03-17T17:41:36.0207256Z * [new branch] gh/henrylhtsang/23/base -> origin/gh/henrylhtsang/23/base 2025-03-17T17:41:36.0208003Z * [new branch] gh/henrylhtsang/23/head -> origin/gh/henrylhtsang/23/head 2025-03-17T17:41:36.0208701Z * [new branch] gh/henrylhtsang/23/orig -> origin/gh/henrylhtsang/23/orig 2025-03-17T17:41:36.0210043Z * [new branch] gh/henrylhtsang/24/base -> origin/gh/henrylhtsang/24/base 2025-03-17T17:41:36.0211072Z * [new branch] gh/henrylhtsang/24/head -> origin/gh/henrylhtsang/24/head 2025-03-17T17:41:36.0211976Z * [new branch] gh/henrylhtsang/24/orig -> origin/gh/henrylhtsang/24/orig 2025-03-17T17:41:36.0213380Z * [new branch] gh/henrylhtsang/25/base -> origin/gh/henrylhtsang/25/base 2025-03-17T17:41:36.0214377Z * [new branch] gh/henrylhtsang/25/head -> origin/gh/henrylhtsang/25/head 2025-03-17T17:41:36.0215427Z * [new branch] gh/henrylhtsang/25/orig -> origin/gh/henrylhtsang/25/orig 2025-03-17T17:41:36.0216897Z * [new branch] gh/henrylhtsang/26/base -> origin/gh/henrylhtsang/26/base 2025-03-17T17:41:36.0217810Z * [new branch] gh/henrylhtsang/26/head -> origin/gh/henrylhtsang/26/head 2025-03-17T17:41:36.0218777Z * [new branch] gh/henrylhtsang/26/orig -> origin/gh/henrylhtsang/26/orig 2025-03-17T17:41:36.0220282Z * [new branch] gh/henrylhtsang/27/base -> origin/gh/henrylhtsang/27/base 2025-03-17T17:41:36.0221134Z * [new branch] gh/henrylhtsang/27/head -> origin/gh/henrylhtsang/27/head 2025-03-17T17:41:36.0222194Z * [new branch] gh/henrylhtsang/27/orig -> origin/gh/henrylhtsang/27/orig 2025-03-17T17:41:36.0223982Z * [new branch] gh/henrylhtsang/28/base -> origin/gh/henrylhtsang/28/base 2025-03-17T17:41:36.0224912Z * [new branch] gh/henrylhtsang/28/head -> origin/gh/henrylhtsang/28/head 2025-03-17T17:41:36.0225956Z * [new branch] gh/henrylhtsang/28/orig -> origin/gh/henrylhtsang/28/orig 2025-03-17T17:41:36.0227568Z * [new branch] gh/henrylhtsang/29/base -> origin/gh/henrylhtsang/29/base 2025-03-17T17:41:36.0228592Z * [new branch] gh/henrylhtsang/29/head -> origin/gh/henrylhtsang/29/head 2025-03-17T17:41:36.0229556Z * [new branch] gh/henrylhtsang/29/orig -> origin/gh/henrylhtsang/29/orig 2025-03-17T17:41:36.0231000Z * [new branch] gh/henrylhtsang/3/base -> origin/gh/henrylhtsang/3/base 2025-03-17T17:41:36.0232004Z * [new branch] gh/henrylhtsang/3/head -> origin/gh/henrylhtsang/3/head 2025-03-17T17:41:36.0233346Z * [new branch] gh/henrylhtsang/3/orig -> origin/gh/henrylhtsang/3/orig 2025-03-17T17:41:36.0235651Z * [new branch] gh/henrylhtsang/30/base -> origin/gh/henrylhtsang/30/base 2025-03-17T17:41:36.0237645Z * [new branch] gh/henrylhtsang/30/head -> origin/gh/henrylhtsang/30/head 2025-03-17T17:41:36.0239234Z * [new branch] gh/henrylhtsang/30/orig -> origin/gh/henrylhtsang/30/orig 2025-03-17T17:41:36.0241518Z * [new branch] gh/henrylhtsang/31/base -> origin/gh/henrylhtsang/31/base 2025-03-17T17:41:36.0242956Z * [new branch] gh/henrylhtsang/31/head -> origin/gh/henrylhtsang/31/head 2025-03-17T17:41:36.0244557Z * [new branch] gh/henrylhtsang/31/orig -> origin/gh/henrylhtsang/31/orig 2025-03-17T17:41:36.0246825Z * [new branch] gh/henrylhtsang/32/base -> origin/gh/henrylhtsang/32/base 2025-03-17T17:41:36.0247661Z * [new branch] gh/henrylhtsang/32/head -> origin/gh/henrylhtsang/32/head 
2025-03-17T17:41:36.0248838Z * [new branch] gh/henrylhtsang/32/orig -> origin/gh/henrylhtsang/32/orig 2025-03-17T17:41:36.0250300Z * [new branch] gh/henrylhtsang/33/base -> origin/gh/henrylhtsang/33/base 2025-03-17T17:41:36.0251403Z * [new branch] gh/henrylhtsang/33/head -> origin/gh/henrylhtsang/33/head 2025-03-17T17:41:36.0252434Z * [new branch] gh/henrylhtsang/33/orig -> origin/gh/henrylhtsang/33/orig 2025-03-17T17:41:36.0253963Z * [new branch] gh/henrylhtsang/34/base -> origin/gh/henrylhtsang/34/base 2025-03-17T17:41:36.0254954Z * [new branch] gh/henrylhtsang/34/head -> origin/gh/henrylhtsang/34/head 2025-03-17T17:41:36.0255975Z * [new branch] gh/henrylhtsang/34/orig -> origin/gh/henrylhtsang/34/orig 2025-03-17T17:41:36.0257384Z * [new branch] gh/henrylhtsang/35/base -> origin/gh/henrylhtsang/35/base 2025-03-17T17:41:36.0258309Z * [new branch] gh/henrylhtsang/35/head -> origin/gh/henrylhtsang/35/head 2025-03-17T17:41:36.0259297Z * [new branch] gh/henrylhtsang/35/orig -> origin/gh/henrylhtsang/35/orig 2025-03-17T17:41:36.0260857Z * [new branch] gh/henrylhtsang/36/base -> origin/gh/henrylhtsang/36/base 2025-03-17T17:41:36.0261765Z * [new branch] gh/henrylhtsang/36/head -> origin/gh/henrylhtsang/36/head 2025-03-17T17:41:36.0262718Z * [new branch] gh/henrylhtsang/36/orig -> origin/gh/henrylhtsang/36/orig 2025-03-17T17:41:36.0264185Z * [new branch] gh/henrylhtsang/37/base -> origin/gh/henrylhtsang/37/base 2025-03-17T17:41:36.0265119Z * [new branch] gh/henrylhtsang/37/head -> origin/gh/henrylhtsang/37/head 2025-03-17T17:41:36.0266101Z * [new branch] gh/henrylhtsang/37/orig -> origin/gh/henrylhtsang/37/orig 2025-03-17T17:41:36.0267768Z * [new branch] gh/henrylhtsang/38/base -> origin/gh/henrylhtsang/38/base 2025-03-17T17:41:36.0268618Z * [new branch] gh/henrylhtsang/38/head -> origin/gh/henrylhtsang/38/head 2025-03-17T17:41:36.0269577Z * [new branch] gh/henrylhtsang/38/orig -> origin/gh/henrylhtsang/38/orig 2025-03-17T17:41:36.0271508Z * [new branch] gh/henrylhtsang/39/base -> origin/gh/henrylhtsang/39/base 2025-03-17T17:41:36.0272458Z * [new branch] gh/henrylhtsang/39/head -> origin/gh/henrylhtsang/39/head 2025-03-17T17:41:36.0273458Z * [new branch] gh/henrylhtsang/39/orig -> origin/gh/henrylhtsang/39/orig 2025-03-17T17:41:36.0274838Z * [new branch] gh/henrylhtsang/4/base -> origin/gh/henrylhtsang/4/base 2025-03-17T17:41:36.0275803Z * [new branch] gh/henrylhtsang/4/head -> origin/gh/henrylhtsang/4/head 2025-03-17T17:41:36.0276793Z * [new branch] gh/henrylhtsang/4/orig -> origin/gh/henrylhtsang/4/orig 2025-03-17T17:41:36.0278228Z * [new branch] gh/henrylhtsang/40/base -> origin/gh/henrylhtsang/40/base 2025-03-17T17:41:36.0279109Z * [new branch] gh/henrylhtsang/40/head -> origin/gh/henrylhtsang/40/head 2025-03-17T17:41:36.0280088Z * [new branch] gh/henrylhtsang/40/orig -> origin/gh/henrylhtsang/40/orig 2025-03-17T17:41:36.0281295Z * [new branch] gh/henrylhtsang/41/base -> origin/gh/henrylhtsang/41/base 2025-03-17T17:41:36.0282739Z * [new branch] gh/henrylhtsang/41/head -> origin/gh/henrylhtsang/41/head 2025-03-17T17:41:36.0283449Z * [new branch] gh/henrylhtsang/41/orig -> origin/gh/henrylhtsang/41/orig 2025-03-17T17:41:36.0284573Z * [new branch] gh/henrylhtsang/42/base -> origin/gh/henrylhtsang/42/base 2025-03-17T17:41:36.0285673Z * [new branch] gh/henrylhtsang/42/head -> origin/gh/henrylhtsang/42/head 2025-03-17T17:41:36.0286830Z * [new branch] gh/henrylhtsang/42/orig -> origin/gh/henrylhtsang/42/orig 2025-03-17T17:41:36.0288030Z * [new branch] gh/henrylhtsang/5/base -> 
origin/gh/henrylhtsang/5/base 2025-03-17T17:41:36.0289056Z * [new branch] gh/henrylhtsang/5/head -> origin/gh/henrylhtsang/5/head 2025-03-17T17:41:36.0290028Z * [new branch] gh/henrylhtsang/5/orig -> origin/gh/henrylhtsang/5/orig 2025-03-17T17:41:36.0291548Z * [new branch] gh/henrylhtsang/6/base -> origin/gh/henrylhtsang/6/base 2025-03-17T17:41:36.0293022Z * [new branch] gh/henrylhtsang/6/head -> origin/gh/henrylhtsang/6/head 2025-03-17T17:41:36.0293999Z * [new branch] gh/henrylhtsang/6/orig -> origin/gh/henrylhtsang/6/orig 2025-03-17T17:41:36.0295431Z * [new branch] gh/henrylhtsang/7/base -> origin/gh/henrylhtsang/7/base 2025-03-17T17:41:36.0296305Z * [new branch] gh/henrylhtsang/7/head -> origin/gh/henrylhtsang/7/head 2025-03-17T17:41:36.0297280Z * [new branch] gh/henrylhtsang/7/orig -> origin/gh/henrylhtsang/7/orig 2025-03-17T17:41:36.0298504Z * [new branch] gh/henrylhtsang/8/base -> origin/gh/henrylhtsang/8/base 2025-03-17T17:41:36.0299607Z * [new branch] gh/henrylhtsang/8/head -> origin/gh/henrylhtsang/8/head 2025-03-17T17:41:36.0300602Z * [new branch] gh/henrylhtsang/8/orig -> origin/gh/henrylhtsang/8/orig 2025-03-17T17:41:36.0302098Z * [new branch] gh/henrylhtsang/9/base -> origin/gh/henrylhtsang/9/base 2025-03-17T17:41:36.0303049Z * [new branch] gh/henrylhtsang/9/head -> origin/gh/henrylhtsang/9/head 2025-03-17T17:41:36.0304035Z * [new branch] gh/henrylhtsang/9/orig -> origin/gh/henrylhtsang/9/orig 2025-03-17T17:41:36.0305661Z * [new branch] gh/int3/21/base -> origin/gh/int3/21/base 2025-03-17T17:41:36.0306831Z * [new branch] gh/int3/21/head -> origin/gh/int3/21/head 2025-03-17T17:41:36.0307845Z * [new branch] gh/int3/21/orig -> origin/gh/int3/21/orig 2025-03-17T17:41:36.0310007Z * [new branch] gh/int3/34/base -> origin/gh/int3/34/base 2025-03-17T17:41:36.0311062Z * [new branch] gh/int3/34/head -> origin/gh/int3/34/head 2025-03-17T17:41:36.0312125Z * [new branch] gh/int3/34/orig -> origin/gh/int3/34/orig 2025-03-17T17:41:36.0313506Z * [new branch] gh/int3/36/base -> origin/gh/int3/36/base 2025-03-17T17:41:36.0314472Z * [new branch] gh/int3/36/head -> origin/gh/int3/36/head 2025-03-17T17:41:36.0315441Z * [new branch] gh/int3/36/orig -> origin/gh/int3/36/orig 2025-03-17T17:41:36.0316926Z * [new branch] gh/int3/41/base -> origin/gh/int3/41/base 2025-03-17T17:41:36.0318004Z * [new branch] gh/int3/41/head -> origin/gh/int3/41/head 2025-03-17T17:41:36.0318965Z * [new branch] gh/int3/41/orig -> origin/gh/int3/41/orig 2025-03-17T17:41:36.0320429Z * [new branch] gh/int3/45/base -> origin/gh/int3/45/base 2025-03-17T17:41:36.0321441Z * [new branch] gh/int3/45/head -> origin/gh/int3/45/head 2025-03-17T17:41:36.0322472Z * [new branch] gh/int3/45/orig -> origin/gh/int3/45/orig 2025-03-17T17:41:36.0324009Z * [new branch] gh/int3/46/base -> origin/gh/int3/46/base 2025-03-17T17:41:36.0324943Z * [new branch] gh/int3/46/head -> origin/gh/int3/46/head 2025-03-17T17:41:36.0325909Z * [new branch] gh/int3/46/orig -> origin/gh/int3/46/orig 2025-03-17T17:41:36.0327361Z * [new branch] gh/int3/47/base -> origin/gh/int3/47/base 2025-03-17T17:41:36.0328315Z * [new branch] gh/int3/47/head -> origin/gh/int3/47/head 2025-03-17T17:41:36.0329291Z * [new branch] gh/int3/47/orig -> origin/gh/int3/47/orig 2025-03-17T17:41:36.0330824Z * [new branch] gh/int3/55/base -> origin/gh/int3/55/base 2025-03-17T17:41:36.0331771Z * [new branch] gh/int3/55/head -> origin/gh/int3/55/head 2025-03-17T17:41:36.0332781Z * [new branch] gh/int3/55/orig -> origin/gh/int3/55/orig 2025-03-17T17:41:36.0334251Z * [new branch] gh/int3/79/base 
-> origin/gh/int3/79/base 2025-03-17T17:41:36.0335238Z * [new branch] gh/int3/79/head -> origin/gh/int3/79/head 2025-03-17T17:41:36.0336214Z * [new branch] gh/int3/79/orig -> origin/gh/int3/79/orig 2025-03-17T17:41:36.0337952Z * [new branch] gh/int3/94/base -> origin/gh/int3/94/base 2025-03-17T17:41:36.0338909Z * [new branch] gh/int3/94/head -> origin/gh/int3/94/head 2025-03-17T17:41:36.0339839Z * [new branch] gh/int3/94/orig -> origin/gh/int3/94/orig 2025-03-17T17:41:36.0341216Z * [new branch] gh/int3/95/base -> origin/gh/int3/95/base 2025-03-17T17:41:36.0342148Z * [new branch] gh/int3/95/head -> origin/gh/int3/95/head 2025-03-17T17:41:36.0343192Z * [new branch] gh/int3/95/orig -> origin/gh/int3/95/orig 2025-03-17T17:41:36.0344640Z * [new branch] gh/int3/97/base -> origin/gh/int3/97/base 2025-03-17T17:41:36.0345599Z * [new branch] gh/int3/97/head -> origin/gh/int3/97/head 2025-03-17T17:41:36.0347334Z * [new branch] gh/isuruf/101/base -> origin/gh/isuruf/101/base 2025-03-17T17:41:36.0348243Z * [new branch] gh/isuruf/101/head -> origin/gh/isuruf/101/head 2025-03-17T17:41:36.0349566Z * [new branch] gh/isuruf/105/base -> origin/gh/isuruf/105/base 2025-03-17T17:41:36.0350613Z * [new branch] gh/isuruf/105/head -> origin/gh/isuruf/105/head 2025-03-17T17:41:36.0351480Z * [new branch] gh/isuruf/105/orig -> origin/gh/isuruf/105/orig 2025-03-17T17:41:36.0352811Z * [new branch] gh/isuruf/110/base -> origin/gh/isuruf/110/base 2025-03-17T17:41:36.0353676Z * [new branch] gh/isuruf/110/head -> origin/gh/isuruf/110/head 2025-03-17T17:41:36.0354647Z * [new branch] gh/isuruf/110/orig -> origin/gh/isuruf/110/orig 2025-03-17T17:41:36.0356029Z * [new branch] gh/isuruf/112/base -> origin/gh/isuruf/112/base 2025-03-17T17:41:36.0356946Z * [new branch] gh/isuruf/112/head -> origin/gh/isuruf/112/head 2025-03-17T17:41:36.0357900Z * [new branch] gh/isuruf/112/orig -> origin/gh/isuruf/112/orig 2025-03-17T17:41:36.0359319Z * [new branch] gh/isuruf/115/base -> origin/gh/isuruf/115/base 2025-03-17T17:41:36.0360232Z * [new branch] gh/isuruf/115/head -> origin/gh/isuruf/115/head 2025-03-17T17:41:36.0361181Z * [new branch] gh/isuruf/115/orig -> origin/gh/isuruf/115/orig 2025-03-17T17:41:36.0362519Z * [new branch] gh/isuruf/116/base -> origin/gh/isuruf/116/base 2025-03-17T17:41:36.0363450Z * [new branch] gh/isuruf/116/head -> origin/gh/isuruf/116/head 2025-03-17T17:41:36.0364439Z * [new branch] gh/isuruf/116/orig -> origin/gh/isuruf/116/orig 2025-03-17T17:41:36.0365786Z * [new branch] gh/isuruf/117/base -> origin/gh/isuruf/117/base 2025-03-17T17:41:36.0367324Z * [new branch] gh/isuruf/117/head -> origin/gh/isuruf/117/head 2025-03-17T17:41:36.0368230Z * [new branch] gh/isuruf/117/orig -> origin/gh/isuruf/117/orig 2025-03-17T17:41:36.0369599Z * [new branch] gh/isuruf/119/base -> origin/gh/isuruf/119/base 2025-03-17T17:41:36.0370517Z * [new branch] gh/isuruf/119/head -> origin/gh/isuruf/119/head 2025-03-17T17:41:36.0371470Z * [new branch] gh/isuruf/119/orig -> origin/gh/isuruf/119/orig 2025-03-17T17:41:36.0372880Z * [new branch] gh/isuruf/120/base -> origin/gh/isuruf/120/base 2025-03-17T17:41:36.0373769Z * [new branch] gh/isuruf/120/head -> origin/gh/isuruf/120/head 2025-03-17T17:41:36.0374753Z * [new branch] gh/isuruf/120/orig -> origin/gh/isuruf/120/orig 2025-03-17T17:41:36.0376079Z * [new branch] gh/isuruf/121/base -> origin/gh/isuruf/121/base 2025-03-17T17:41:36.0376965Z * [new branch] gh/isuruf/121/head -> origin/gh/isuruf/121/head 2025-03-17T17:41:36.0377987Z * [new branch] gh/isuruf/121/orig -> origin/gh/isuruf/121/orig 
2025-03-17T17:41:36.0379290Z * [new branch] gh/isuruf/122/base -> origin/gh/isuruf/122/base 2025-03-17T17:41:36.0380184Z * [new branch] gh/isuruf/122/head -> origin/gh/isuruf/122/head 2025-03-17T17:41:36.0381159Z * [new branch] gh/isuruf/122/orig -> origin/gh/isuruf/122/orig 2025-03-17T17:41:36.0382476Z * [new branch] gh/isuruf/123/base -> origin/gh/isuruf/123/base 2025-03-17T17:41:36.0383354Z * [new branch] gh/isuruf/123/head -> origin/gh/isuruf/123/head 2025-03-17T17:41:36.0384332Z * [new branch] gh/isuruf/123/orig -> origin/gh/isuruf/123/orig 2025-03-17T17:41:36.0385687Z * [new branch] gh/isuruf/124/base -> origin/gh/isuruf/124/base 2025-03-17T17:41:36.0386647Z * [new branch] gh/isuruf/124/head -> origin/gh/isuruf/124/head 2025-03-17T17:41:36.0387692Z * [new branch] gh/isuruf/124/orig -> origin/gh/isuruf/124/orig 2025-03-17T17:41:36.0389045Z * [new branch] gh/isuruf/125/base -> origin/gh/isuruf/125/base 2025-03-17T17:41:36.0390024Z * [new branch] gh/isuruf/125/head -> origin/gh/isuruf/125/head 2025-03-17T17:41:36.0390919Z * [new branch] gh/isuruf/125/orig -> origin/gh/isuruf/125/orig 2025-03-17T17:41:36.0392271Z * [new branch] gh/isuruf/126/base -> origin/gh/isuruf/126/base 2025-03-17T17:41:36.0393201Z * [new branch] gh/isuruf/126/head -> origin/gh/isuruf/126/head 2025-03-17T17:41:36.0394232Z * [new branch] gh/isuruf/126/orig -> origin/gh/isuruf/126/orig 2025-03-17T17:41:36.0395472Z * [new branch] gh/isuruf/127/base -> origin/gh/isuruf/127/base 2025-03-17T17:41:36.0396398Z * [new branch] gh/isuruf/127/head -> origin/gh/isuruf/127/head 2025-03-17T17:41:36.0397398Z * [new branch] gh/isuruf/127/orig -> origin/gh/isuruf/127/orig 2025-03-17T17:41:36.0399100Z * [new branch] gh/isuruf/128/base -> origin/gh/isuruf/128/base 2025-03-17T17:41:36.0399992Z * [new branch] gh/isuruf/128/head -> origin/gh/isuruf/128/head 2025-03-17T17:41:36.0400999Z * [new branch] gh/isuruf/128/orig -> origin/gh/isuruf/128/orig 2025-03-17T17:41:36.0402424Z * [new branch] gh/isuruf/129/base -> origin/gh/isuruf/129/base 2025-03-17T17:41:36.0403352Z * [new branch] gh/isuruf/129/head -> origin/gh/isuruf/129/head 2025-03-17T17:41:36.0404540Z * [new branch] gh/isuruf/129/orig -> origin/gh/isuruf/129/orig 2025-03-17T17:41:36.0405923Z * [new branch] gh/isuruf/130/base -> origin/gh/isuruf/130/base 2025-03-17T17:41:36.0406842Z * [new branch] gh/isuruf/130/head -> origin/gh/isuruf/130/head 2025-03-17T17:41:36.0407823Z * [new branch] gh/isuruf/130/orig -> origin/gh/isuruf/130/orig 2025-03-17T17:41:36.0409310Z * [new branch] gh/isuruf/131/base -> origin/gh/isuruf/131/base 2025-03-17T17:41:36.0410344Z * [new branch] gh/isuruf/131/head -> origin/gh/isuruf/131/head 2025-03-17T17:41:36.0411421Z * [new branch] gh/isuruf/131/orig -> origin/gh/isuruf/131/orig 2025-03-17T17:41:36.0412869Z * [new branch] gh/isuruf/132/base -> origin/gh/isuruf/132/base 2025-03-17T17:41:36.0413789Z * [new branch] gh/isuruf/132/head -> origin/gh/isuruf/132/head 2025-03-17T17:41:36.0414798Z * [new branch] gh/isuruf/132/orig -> origin/gh/isuruf/132/orig 2025-03-17T17:41:36.0416111Z * [new branch] gh/isuruf/133/base -> origin/gh/isuruf/133/base 2025-03-17T17:41:36.0433340Z * [new branch] gh/isuruf/133/head -> origin/gh/isuruf/133/head 2025-03-17T17:41:36.0433955Z * [new branch] gh/isuruf/133/orig -> origin/gh/isuruf/133/orig 2025-03-17T17:41:36.0434570Z * [new branch] gh/isuruf/39/base -> origin/gh/isuruf/39/base 2025-03-17T17:41:36.0435280Z * [new branch] gh/isuruf/39/head -> origin/gh/isuruf/39/head 2025-03-17T17:41:36.0435839Z * [new branch] gh/isuruf/39/orig 
-> origin/gh/isuruf/39/orig 2025-03-17T17:41:36.0436417Z * [new branch] gh/isuruf/81/base -> origin/gh/isuruf/81/base 2025-03-17T17:41:36.0437188Z * [new branch] gh/isuruf/81/head -> origin/gh/isuruf/81/head 2025-03-17T17:41:36.0437751Z * [new branch] gh/isuruf/81/orig -> origin/gh/isuruf/81/orig 2025-03-17T17:41:36.0438333Z * [new branch] gh/jamesjwu/100/base -> origin/gh/jamesjwu/100/base 2025-03-17T17:41:36.0438925Z * [new branch] gh/jamesjwu/100/head -> origin/gh/jamesjwu/100/head 2025-03-17T17:41:36.0439515Z * [new branch] gh/jamesjwu/100/orig -> origin/gh/jamesjwu/100/orig 2025-03-17T17:41:36.0440244Z * [new branch] gh/jamesjwu/102/base -> origin/gh/jamesjwu/102/base 2025-03-17T17:41:36.0440838Z * [new branch] gh/jamesjwu/102/head -> origin/gh/jamesjwu/102/head 2025-03-17T17:41:36.0441500Z * [new branch] gh/jamesjwu/105/base -> origin/gh/jamesjwu/105/base 2025-03-17T17:41:36.0442092Z * [new branch] gh/jamesjwu/105/head -> origin/gh/jamesjwu/105/head 2025-03-17T17:41:36.0442798Z * [new branch] gh/jamesjwu/105/orig -> origin/gh/jamesjwu/105/orig 2025-03-17T17:41:36.0443390Z * [new branch] gh/jamesjwu/108/base -> origin/gh/jamesjwu/108/base 2025-03-17T17:41:36.0443979Z * [new branch] gh/jamesjwu/108/head -> origin/gh/jamesjwu/108/head 2025-03-17T17:41:36.0444692Z * [new branch] gh/jamesjwu/108/orig -> origin/gh/jamesjwu/108/orig 2025-03-17T17:41:36.0445283Z * [new branch] gh/jamesjwu/109/base -> origin/gh/jamesjwu/109/base 2025-03-17T17:41:36.0446009Z * [new branch] gh/jamesjwu/109/head -> origin/gh/jamesjwu/109/head 2025-03-17T17:41:36.0447408Z * [new branch] gh/jamesjwu/109/orig -> origin/gh/jamesjwu/109/orig 2025-03-17T17:41:36.0449174Z * [new branch] gh/jamesjwu/110/base -> origin/gh/jamesjwu/110/base 2025-03-17T17:41:36.0450669Z * [new branch] gh/jamesjwu/110/head -> origin/gh/jamesjwu/110/head 2025-03-17T17:41:36.0452027Z * [new branch] gh/jamesjwu/110/orig -> origin/gh/jamesjwu/110/orig 2025-03-17T17:41:36.0453725Z * [new branch] gh/jamesjwu/111/base -> origin/gh/jamesjwu/111/base 2025-03-17T17:41:36.0455067Z * [new branch] gh/jamesjwu/111/head -> origin/gh/jamesjwu/111/head 2025-03-17T17:41:36.0456412Z * [new branch] gh/jamesjwu/111/orig -> origin/gh/jamesjwu/111/orig 2025-03-17T17:41:36.0457920Z * [new branch] gh/jamesjwu/112/base -> origin/gh/jamesjwu/112/base 2025-03-17T17:41:36.0459139Z * [new branch] gh/jamesjwu/112/head -> origin/gh/jamesjwu/112/head 2025-03-17T17:41:36.0460474Z * [new branch] gh/jamesjwu/112/orig -> origin/gh/jamesjwu/112/orig 2025-03-17T17:41:36.0461933Z * [new branch] gh/jamesjwu/113/base -> origin/gh/jamesjwu/113/base 2025-03-17T17:41:36.0463106Z * [new branch] gh/jamesjwu/113/head -> origin/gh/jamesjwu/113/head 2025-03-17T17:41:36.0464325Z * [new branch] gh/jamesjwu/113/orig -> origin/gh/jamesjwu/113/orig 2025-03-17T17:41:36.0466042Z * [new branch] gh/jamesjwu/114/base -> origin/gh/jamesjwu/114/base 2025-03-17T17:41:36.0467479Z * [new branch] gh/jamesjwu/114/head -> origin/gh/jamesjwu/114/head 2025-03-17T17:41:36.0468804Z * [new branch] gh/jamesjwu/114/orig -> origin/gh/jamesjwu/114/orig 2025-03-17T17:41:36.0470627Z * [new branch] gh/jamesjwu/115/base -> origin/gh/jamesjwu/115/base 2025-03-17T17:41:36.0472017Z * [new branch] gh/jamesjwu/115/head -> origin/gh/jamesjwu/115/head 2025-03-17T17:41:36.0473554Z * [new branch] gh/jamesjwu/115/orig -> origin/gh/jamesjwu/115/orig 2025-03-17T17:41:36.0475162Z * [new branch] gh/jamesjwu/116/base -> origin/gh/jamesjwu/116/base 2025-03-17T17:41:36.0476489Z * [new branch] gh/jamesjwu/116/head -> 
origin/gh/jamesjwu/116/head 2025-03-17T17:41:36.0478062Z * [new branch] gh/jamesjwu/116/orig -> origin/gh/jamesjwu/116/orig 2025-03-17T17:41:36.0479797Z * [new branch] gh/jamesjwu/117/base -> origin/gh/jamesjwu/117/base 2025-03-17T17:41:36.0481138Z * [new branch] gh/jamesjwu/117/head -> origin/gh/jamesjwu/117/head 2025-03-17T17:41:36.0482948Z * [new branch] gh/jamesjwu/117/orig -> origin/gh/jamesjwu/117/orig 2025-03-17T17:41:36.0484709Z * [new branch] gh/jamesjwu/118/base -> origin/gh/jamesjwu/118/base 2025-03-17T17:41:36.0485823Z * [new branch] gh/jamesjwu/118/head -> origin/gh/jamesjwu/118/head 2025-03-17T17:41:36.0487216Z * [new branch] gh/jamesjwu/118/orig -> origin/gh/jamesjwu/118/orig 2025-03-17T17:41:36.0488874Z * [new branch] gh/jamesjwu/119/base -> origin/gh/jamesjwu/119/base 2025-03-17T17:41:36.0490203Z * [new branch] gh/jamesjwu/119/head -> origin/gh/jamesjwu/119/head 2025-03-17T17:41:36.0491989Z * [new branch] gh/jamesjwu/119/orig -> origin/gh/jamesjwu/119/orig 2025-03-17T17:41:36.0493988Z * [new branch] gh/jamesjwu/120/base -> origin/gh/jamesjwu/120/base 2025-03-17T17:41:36.0495309Z * [new branch] gh/jamesjwu/120/head -> origin/gh/jamesjwu/120/head 2025-03-17T17:41:36.0496613Z * [new branch] gh/jamesjwu/120/orig -> origin/gh/jamesjwu/120/orig 2025-03-17T17:41:36.0498638Z * [new branch] gh/jamesjwu/121/base -> origin/gh/jamesjwu/121/base 2025-03-17T17:41:36.0499947Z * [new branch] gh/jamesjwu/121/head -> origin/gh/jamesjwu/121/head 2025-03-17T17:41:36.0501213Z * [new branch] gh/jamesjwu/121/orig -> origin/gh/jamesjwu/121/orig 2025-03-17T17:41:36.0503011Z * [new branch] gh/jamesjwu/52/base -> origin/gh/jamesjwu/52/base 2025-03-17T17:41:36.0504358Z * [new branch] gh/jamesjwu/52/head -> origin/gh/jamesjwu/52/head 2025-03-17T17:41:36.0505903Z * [new branch] gh/jamesjwu/53/base -> origin/gh/jamesjwu/53/base 2025-03-17T17:41:36.0507724Z * [new branch] gh/jamesjwu/53/head -> origin/gh/jamesjwu/53/head 2025-03-17T17:41:36.0509292Z * [new branch] gh/jamesjwu/54/base -> origin/gh/jamesjwu/54/base 2025-03-17T17:41:36.0510481Z * [new branch] gh/jamesjwu/54/head -> origin/gh/jamesjwu/54/head 2025-03-17T17:41:36.0512126Z * [new branch] gh/jamesjwu/55/base -> origin/gh/jamesjwu/55/base 2025-03-17T17:41:36.0513351Z * [new branch] gh/jamesjwu/55/head -> origin/gh/jamesjwu/55/head 2025-03-17T17:41:36.0514905Z * [new branch] gh/jamesjwu/56/base -> origin/gh/jamesjwu/56/base 2025-03-17T17:41:36.0516051Z * [new branch] gh/jamesjwu/56/head -> origin/gh/jamesjwu/56/head 2025-03-17T17:41:36.0518007Z * [new branch] gh/jamesjwu/57/base -> origin/gh/jamesjwu/57/base 2025-03-17T17:41:36.0519295Z * [new branch] gh/jamesjwu/57/head -> origin/gh/jamesjwu/57/head 2025-03-17T17:41:36.0520879Z * [new branch] gh/jamesjwu/58/base -> origin/gh/jamesjwu/58/base 2025-03-17T17:41:36.0521981Z * [new branch] gh/jamesjwu/58/head -> origin/gh/jamesjwu/58/head 2025-03-17T17:41:36.0523559Z * [new branch] gh/jamesjwu/59/base -> origin/gh/jamesjwu/59/base 2025-03-17T17:41:36.0524888Z * [new branch] gh/jamesjwu/59/head -> origin/gh/jamesjwu/59/head 2025-03-17T17:41:36.0526397Z * [new branch] gh/jamesjwu/60/base -> origin/gh/jamesjwu/60/base 2025-03-17T17:41:36.0527588Z * [new branch] gh/jamesjwu/60/head -> origin/gh/jamesjwu/60/head 2025-03-17T17:41:36.0529149Z * [new branch] gh/jamesjwu/61/base -> origin/gh/jamesjwu/61/base 2025-03-17T17:41:36.0530945Z * [new branch] gh/jamesjwu/61/head -> origin/gh/jamesjwu/61/head 2025-03-17T17:41:36.0532238Z * [new branch] gh/jamesjwu/62/base -> origin/gh/jamesjwu/62/base 
2025-03-17T17:41:36.0533515Z * [new branch] gh/jamesjwu/62/head -> origin/gh/jamesjwu/62/head 2025-03-17T17:41:36.0535366Z * [new branch] gh/jamesjwu/63/base -> origin/gh/jamesjwu/63/base 2025-03-17T17:41:36.0537209Z * [new branch] gh/jamesjwu/63/head -> origin/gh/jamesjwu/63/head 2025-03-17T17:41:36.0539333Z * [new branch] gh/jamesjwu/64/base -> origin/gh/jamesjwu/64/base 2025-03-17T17:41:36.0540629Z * [new branch] gh/jamesjwu/64/head -> origin/gh/jamesjwu/64/head 2025-03-17T17:41:36.0542543Z * [new branch] gh/jamesjwu/65/base -> origin/gh/jamesjwu/65/base 2025-03-17T17:41:36.0543860Z * [new branch] gh/jamesjwu/65/head -> origin/gh/jamesjwu/65/head 2025-03-17T17:41:36.0546119Z * [new branch] gh/jamesjwu/97/base -> origin/gh/jamesjwu/97/base 2025-03-17T17:41:36.0547463Z * [new branch] gh/jamesjwu/97/head -> origin/gh/jamesjwu/97/head 2025-03-17T17:41:36.0548916Z * [new branch] gh/jamesjwu/97/orig -> origin/gh/jamesjwu/97/orig 2025-03-17T17:41:36.0551603Z * [new branch] gh/janeyx99/165/base -> origin/gh/janeyx99/165/base 2025-03-17T17:41:36.0552697Z * [new branch] gh/janeyx99/165/head -> origin/gh/janeyx99/165/head 2025-03-17T17:41:36.0554018Z * [new branch] gh/janeyx99/165/orig -> origin/gh/janeyx99/165/orig 2025-03-17T17:41:36.0555524Z * [new branch] gh/janeyx99/201/base -> origin/gh/janeyx99/201/base 2025-03-17T17:41:36.0556852Z * [new branch] gh/janeyx99/201/head -> origin/gh/janeyx99/201/head 2025-03-17T17:41:36.0558151Z * [new branch] gh/janeyx99/201/orig -> origin/gh/janeyx99/201/orig 2025-03-17T17:41:36.0559640Z * [new branch] gh/janeyx99/221/base -> origin/gh/janeyx99/221/base 2025-03-17T17:41:36.0560931Z * [new branch] gh/janeyx99/221/head -> origin/gh/janeyx99/221/head 2025-03-17T17:41:36.0562221Z * [new branch] gh/janeyx99/221/orig -> origin/gh/janeyx99/221/orig 2025-03-17T17:41:36.0564049Z * [new branch] gh/janeyx99/222/base -> origin/gh/janeyx99/222/base 2025-03-17T17:41:36.0565960Z * [new branch] gh/janeyx99/222/head -> origin/gh/janeyx99/222/head 2025-03-17T17:41:36.0567062Z * [new branch] gh/janeyx99/222/orig -> origin/gh/janeyx99/222/orig 2025-03-17T17:41:36.0569191Z * [new branch] gh/janeyx99/223/base -> origin/gh/janeyx99/223/base 2025-03-17T17:41:36.0570555Z * [new branch] gh/janeyx99/223/head -> origin/gh/janeyx99/223/head 2025-03-17T17:41:36.0572028Z * [new branch] gh/janeyx99/223/orig -> origin/gh/janeyx99/223/orig 2025-03-17T17:41:36.0574336Z * [new branch] gh/janeyx99/224/base -> origin/gh/janeyx99/224/base 2025-03-17T17:41:36.0575587Z * [new branch] gh/janeyx99/224/head -> origin/gh/janeyx99/224/head 2025-03-17T17:41:36.0577057Z * [new branch] gh/janeyx99/224/orig -> origin/gh/janeyx99/224/orig 2025-03-17T17:41:36.0578892Z * [new branch] gh/janeyx99/225/base -> origin/gh/janeyx99/225/base 2025-03-17T17:41:36.0580409Z * [new branch] gh/janeyx99/225/head -> origin/gh/janeyx99/225/head 2025-03-17T17:41:36.0581851Z * [new branch] gh/janeyx99/225/orig -> origin/gh/janeyx99/225/orig 2025-03-17T17:41:36.0584257Z * [new branch] gh/janeyx99/226/base -> origin/gh/janeyx99/226/base 2025-03-17T17:41:36.0585853Z * [new branch] gh/janeyx99/226/head -> origin/gh/janeyx99/226/head 2025-03-17T17:41:36.0587567Z * [new branch] gh/janeyx99/226/orig -> origin/gh/janeyx99/226/orig 2025-03-17T17:41:36.0589799Z * [new branch] gh/janeyx99/227/base -> origin/gh/janeyx99/227/base 2025-03-17T17:41:36.0591071Z * [new branch] gh/janeyx99/227/head -> origin/gh/janeyx99/227/head 2025-03-17T17:41:36.0592609Z * [new branch] gh/janeyx99/227/orig -> origin/gh/janeyx99/227/orig 
2025-03-17T17:41:36.0594990Z * [new branch] gh/janeyx99/228/base -> origin/gh/janeyx99/228/base 2025-03-17T17:41:36.0596344Z * [new branch] gh/janeyx99/228/head -> origin/gh/janeyx99/228/head 2025-03-17T17:41:36.0597792Z * [new branch] gh/janeyx99/228/orig -> origin/gh/janeyx99/228/orig 2025-03-17T17:41:36.0599597Z * [new branch] gh/janeyx99/229/base -> origin/gh/janeyx99/229/base 2025-03-17T17:41:36.0601129Z * [new branch] gh/janeyx99/229/head -> origin/gh/janeyx99/229/head 2025-03-17T17:41:36.0602625Z * [new branch] gh/janeyx99/229/orig -> origin/gh/janeyx99/229/orig 2025-03-17T17:41:36.0604784Z * [new branch] gh/janeyx99/230/base -> origin/gh/janeyx99/230/base 2025-03-17T17:41:36.0606091Z * [new branch] gh/janeyx99/230/head -> origin/gh/janeyx99/230/head 2025-03-17T17:41:36.0607532Z * [new branch] gh/janeyx99/230/orig -> origin/gh/janeyx99/230/orig 2025-03-17T17:41:36.0609865Z * [new branch] gh/janeyx99/231/base -> origin/gh/janeyx99/231/base 2025-03-17T17:41:36.0611082Z * [new branch] gh/janeyx99/231/head -> origin/gh/janeyx99/231/head 2025-03-17T17:41:36.0612587Z * [new branch] gh/janeyx99/231/orig -> origin/gh/janeyx99/231/orig 2025-03-17T17:41:36.0614422Z * [new branch] gh/janeyx99/232/base -> origin/gh/janeyx99/232/base 2025-03-17T17:41:36.0615919Z * [new branch] gh/janeyx99/232/head -> origin/gh/janeyx99/232/head 2025-03-17T17:41:36.0617340Z * [new branch] gh/janeyx99/232/orig -> origin/gh/janeyx99/232/orig 2025-03-17T17:41:36.0619138Z * [new branch] gh/janeyx99/233/base -> origin/gh/janeyx99/233/base 2025-03-17T17:41:36.0620558Z * [new branch] gh/janeyx99/233/head -> origin/gh/janeyx99/233/head 2025-03-17T17:41:36.0622057Z * [new branch] gh/janeyx99/233/orig -> origin/gh/janeyx99/233/orig 2025-03-17T17:41:36.0624062Z * [new branch] gh/janeyx99/234/base -> origin/gh/janeyx99/234/base 2025-03-17T17:41:36.0625924Z * [new branch] gh/janeyx99/234/head -> origin/gh/janeyx99/234/head 2025-03-17T17:41:36.0627217Z * [new branch] gh/janeyx99/234/orig -> origin/gh/janeyx99/234/orig 2025-03-17T17:41:36.0629450Z * [new branch] gh/janeyx99/88/base -> origin/gh/janeyx99/88/base 2025-03-17T17:41:36.0630720Z * [new branch] gh/janeyx99/88/head -> origin/gh/janeyx99/88/head 2025-03-17T17:41:36.0632174Z * [new branch] gh/janeyx99/88/orig -> origin/gh/janeyx99/88/orig 2025-03-17T17:41:36.0635452Z * [new branch] gh/jansel/227/base -> origin/gh/jansel/227/base 2025-03-17T17:41:36.0636552Z * [new branch] gh/jansel/227/head -> origin/gh/jansel/227/head 2025-03-17T17:41:36.0638326Z * [new branch] gh/jansel/227/orig -> origin/gh/jansel/227/orig 2025-03-17T17:41:36.0640531Z * [new branch] gh/jansel/360/base -> origin/gh/jansel/360/base 2025-03-17T17:41:36.0641784Z * [new branch] gh/jansel/360/head -> origin/gh/jansel/360/head 2025-03-17T17:41:36.0643956Z * [new branch] gh/jansel/451/base -> origin/gh/jansel/451/base 2025-03-17T17:41:36.0645301Z * [new branch] gh/jansel/451/head -> origin/gh/jansel/451/head 2025-03-17T17:41:36.0646760Z * [new branch] gh/jansel/451/orig -> origin/gh/jansel/451/orig 2025-03-17T17:41:36.0648870Z * [new branch] gh/jansel/462/base -> origin/gh/jansel/462/base 2025-03-17T17:41:36.0650195Z * [new branch] gh/jansel/462/head -> origin/gh/jansel/462/head 2025-03-17T17:41:36.0651633Z * [new branch] gh/jansel/462/orig -> origin/gh/jansel/462/orig 2025-03-17T17:41:36.0653799Z * [new branch] gh/jansel/473/base -> origin/gh/jansel/473/base 2025-03-17T17:41:36.0655243Z * [new branch] gh/jansel/473/head -> origin/gh/jansel/473/head 2025-03-17T17:41:36.0656516Z * [new branch] 
gh/jansel/473/orig -> origin/gh/jansel/473/orig 2025-03-17T17:41:36.0658639Z * [new branch] gh/jansel/486/base -> origin/gh/jansel/486/base 2025-03-17T17:41:36.0659948Z * [new branch] gh/jansel/486/head -> origin/gh/jansel/486/head 2025-03-17T17:41:36.0661370Z * [new branch] gh/jansel/486/orig -> origin/gh/jansel/486/orig 2025-03-17T17:41:36.0663495Z * [new branch] gh/jansel/505/base -> origin/gh/jansel/505/base 2025-03-17T17:41:36.0664789Z * [new branch] gh/jansel/505/head -> origin/gh/jansel/505/head 2025-03-17T17:41:36.0666484Z * [new branch] gh/jansel/505/orig -> origin/gh/jansel/505/orig 2025-03-17T17:41:36.0668635Z * [new branch] gh/jansel/506/base -> origin/gh/jansel/506/base 2025-03-17T17:41:36.0669952Z * [new branch] gh/jansel/506/head -> origin/gh/jansel/506/head 2025-03-17T17:41:36.0671386Z * [new branch] gh/jansel/506/orig -> origin/gh/jansel/506/orig 2025-03-17T17:41:36.0673541Z * [new branch] gh/jansel/507/base -> origin/gh/jansel/507/base 2025-03-17T17:41:36.0674830Z * [new branch] gh/jansel/507/head -> origin/gh/jansel/507/head 2025-03-17T17:41:36.0676265Z * [new branch] gh/jansel/507/orig -> origin/gh/jansel/507/orig 2025-03-17T17:41:36.0678559Z * [new branch] gh/jansel/508/base -> origin/gh/jansel/508/base 2025-03-17T17:41:36.0679845Z * [new branch] gh/jansel/508/head -> origin/gh/jansel/508/head 2025-03-17T17:41:36.0681273Z * [new branch] gh/jansel/508/orig -> origin/gh/jansel/508/orig 2025-03-17T17:41:36.0683511Z * [new branch] gh/jansel/509/base -> origin/gh/jansel/509/base 2025-03-17T17:41:36.0684764Z * [new branch] gh/jansel/509/head -> origin/gh/jansel/509/head 2025-03-17T17:41:36.0686224Z * [new branch] gh/jansel/509/orig -> origin/gh/jansel/509/orig 2025-03-17T17:41:36.0688895Z * [new branch] gh/jansel/510/base -> origin/gh/jansel/510/base 2025-03-17T17:41:36.0690134Z * [new branch] gh/jansel/510/head -> origin/gh/jansel/510/head 2025-03-17T17:41:36.0691663Z * [new branch] gh/jansel/510/orig -> origin/gh/jansel/510/orig 2025-03-17T17:41:36.0693534Z * [new branch] gh/jansel/511/base -> origin/gh/jansel/511/base 2025-03-17T17:41:36.0694998Z * [new branch] gh/jansel/511/head -> origin/gh/jansel/511/head 2025-03-17T17:41:36.0696433Z * [new branch] gh/jansel/511/orig -> origin/gh/jansel/511/orig 2025-03-17T17:41:36.0698642Z * [new branch] gh/jansel/512/base -> origin/gh/jansel/512/base 2025-03-17T17:41:36.0700007Z * [new branch] gh/jansel/512/head -> origin/gh/jansel/512/head 2025-03-17T17:41:36.0701517Z * [new branch] gh/jansel/512/orig -> origin/gh/jansel/512/orig 2025-03-17T17:41:36.0703638Z * [new branch] gh/jansel/513/base -> origin/gh/jansel/513/base 2025-03-17T17:41:36.0704901Z * [new branch] gh/jansel/513/head -> origin/gh/jansel/513/head 2025-03-17T17:41:36.0706464Z * [new branch] gh/jansel/513/orig -> origin/gh/jansel/513/orig 2025-03-17T17:41:36.0708672Z * [new branch] gh/jansel/514/base -> origin/gh/jansel/514/base 2025-03-17T17:41:36.0709906Z * [new branch] gh/jansel/514/head -> origin/gh/jansel/514/head 2025-03-17T17:41:36.0711354Z * [new branch] gh/jansel/514/orig -> origin/gh/jansel/514/orig 2025-03-17T17:41:36.0714126Z * [new branch] gh/jansel/515/base -> origin/gh/jansel/515/base 2025-03-17T17:41:36.0715237Z * [new branch] gh/jansel/515/head -> origin/gh/jansel/515/head 2025-03-17T17:41:36.0716676Z * [new branch] gh/jansel/515/orig -> origin/gh/jansel/515/orig 2025-03-17T17:41:36.0719059Z * [new branch] gh/jansel/516/base -> origin/gh/jansel/516/base 2025-03-17T17:41:36.0720229Z * [new branch] gh/jansel/516/head -> origin/gh/jansel/516/head 
2025-03-17T17:41:36.0721735Z * [new branch] gh/jansel/516/orig -> origin/gh/jansel/516/orig 2025-03-17T17:41:36.0723949Z * [new branch] gh/jansel/517/base -> origin/gh/jansel/517/base 2025-03-17T17:41:36.0725252Z * [new branch] gh/jansel/517/head -> origin/gh/jansel/517/head 2025-03-17T17:41:36.0726705Z * [new branch] gh/jansel/517/orig -> origin/gh/jansel/517/orig 2025-03-17T17:41:36.0729082Z * [new branch] gh/jansel/518/base -> origin/gh/jansel/518/base 2025-03-17T17:41:36.0730301Z * [new branch] gh/jansel/518/head -> origin/gh/jansel/518/head 2025-03-17T17:41:36.0731741Z * [new branch] gh/jansel/518/orig -> origin/gh/jansel/518/orig 2025-03-17T17:41:36.0733899Z * [new branch] gh/jansel/519/base -> origin/gh/jansel/519/base 2025-03-17T17:41:36.0735233Z * [new branch] gh/jansel/519/head -> origin/gh/jansel/519/head 2025-03-17T17:41:36.0736664Z * [new branch] gh/jansel/519/orig -> origin/gh/jansel/519/orig 2025-03-17T17:41:36.0738827Z * [new branch] gh/jansel/520/base -> origin/gh/jansel/520/base 2025-03-17T17:41:36.0739572Z * [new branch] gh/jansel/520/head -> origin/gh/jansel/520/head 2025-03-17T17:41:36.0740531Z * [new branch] gh/jansel/520/orig -> origin/gh/jansel/520/orig 2025-03-17T17:41:36.0741831Z * [new branch] gh/jansel/521/base -> origin/gh/jansel/521/base 2025-03-17T17:41:36.0742777Z * [new branch] gh/jansel/521/head -> origin/gh/jansel/521/head 2025-03-17T17:41:36.0743756Z * [new branch] gh/jansel/521/orig -> origin/gh/jansel/521/orig 2025-03-17T17:41:36.0745469Z * [new branch] gh/jbschlosser/195/base -> origin/gh/jbschlosser/195/base 2025-03-17T17:41:36.0746496Z * [new branch] gh/jbschlosser/195/head -> origin/gh/jbschlosser/195/head 2025-03-17T17:41:36.0747484Z * [new branch] gh/jbschlosser/195/orig -> origin/gh/jbschlosser/195/orig 2025-03-17T17:41:36.0749011Z * [new branch] gh/jbschlosser/208/base -> origin/gh/jbschlosser/208/base 2025-03-17T17:41:36.0749873Z * [new branch] gh/jbschlosser/208/head -> origin/gh/jbschlosser/208/head 2025-03-17T17:41:36.0750831Z * [new branch] gh/jbschlosser/208/orig -> origin/gh/jbschlosser/208/orig 2025-03-17T17:41:36.0752417Z * [new branch] gh/jbschlosser/214/base -> origin/gh/jbschlosser/214/base 2025-03-17T17:41:36.0753275Z * [new branch] gh/jbschlosser/214/head -> origin/gh/jbschlosser/214/head 2025-03-17T17:41:36.0754220Z * [new branch] gh/jbschlosser/214/orig -> origin/gh/jbschlosser/214/orig 2025-03-17T17:41:36.0755717Z * [new branch] gh/jbschlosser/226/base -> origin/gh/jbschlosser/226/base 2025-03-17T17:41:36.0756606Z * [new branch] gh/jbschlosser/226/head -> origin/gh/jbschlosser/226/head 2025-03-17T17:41:36.0757566Z * [new branch] gh/jbschlosser/226/orig -> origin/gh/jbschlosser/226/orig 2025-03-17T17:41:36.0758909Z * [new branch] gh/jbschlosser/227/base -> origin/gh/jbschlosser/227/base 2025-03-17T17:41:36.0759805Z * [new branch] gh/jbschlosser/227/head -> origin/gh/jbschlosser/227/head 2025-03-17T17:41:36.0760908Z * [new branch] gh/jbschlosser/227/orig -> origin/gh/jbschlosser/227/orig 2025-03-17T17:41:36.0762199Z * [new branch] gh/jbschlosser/228/base -> origin/gh/jbschlosser/228/base 2025-03-17T17:41:36.0763170Z * [new branch] gh/jbschlosser/228/head -> origin/gh/jbschlosser/228/head 2025-03-17T17:41:36.0764219Z * [new branch] gh/jbschlosser/228/orig -> origin/gh/jbschlosser/228/orig 2025-03-17T17:41:36.0765728Z * [new branch] gh/jbschlosser/229/base -> origin/gh/jbschlosser/229/base 2025-03-17T17:41:36.0766560Z * [new branch] gh/jbschlosser/229/head -> origin/gh/jbschlosser/229/head 2025-03-17T17:41:36.0767557Z * [new 
branch] gh/jbschlosser/229/orig -> origin/gh/jbschlosser/229/orig 2025-03-17T17:41:36.0769022Z * [new branch] gh/jbschlosser/230/base -> origin/gh/jbschlosser/230/base 2025-03-17T17:41:36.0770312Z * [new branch] gh/jbschlosser/230/head -> origin/gh/jbschlosser/230/head 2025-03-17T17:41:36.0771258Z * [new branch] gh/jbschlosser/230/orig -> origin/gh/jbschlosser/230/orig 2025-03-17T17:41:36.0772638Z * [new branch] gh/jbschlosser/231/base -> origin/gh/jbschlosser/231/base 2025-03-17T17:41:36.0773625Z * [new branch] gh/jbschlosser/231/head -> origin/gh/jbschlosser/231/head 2025-03-17T17:41:36.0774583Z * [new branch] gh/jbschlosser/231/orig -> origin/gh/jbschlosser/231/orig 2025-03-17T17:41:36.0775989Z * [new branch] gh/jbschlosser/89/base -> origin/gh/jbschlosser/89/base 2025-03-17T17:41:36.0776896Z * [new branch] gh/jbschlosser/89/head -> origin/gh/jbschlosser/89/head 2025-03-17T17:41:36.0777792Z * [new branch] gh/jbschlosser/89/orig -> origin/gh/jbschlosser/89/orig 2025-03-17T17:41:36.0779525Z * [new branch] gh/jcaip/70/base -> origin/gh/jcaip/70/base 2025-03-17T17:41:36.0780482Z * [new branch] gh/jcaip/70/head -> origin/gh/jcaip/70/head 2025-03-17T17:41:36.0781438Z * [new branch] gh/jcaip/70/orig -> origin/gh/jcaip/70/orig 2025-03-17T17:41:36.0783171Z * [new branch] gh/jerryzh168/855/base -> origin/gh/jerryzh168/855/base 2025-03-17T17:41:36.0784503Z * [new branch] gh/jerryzh168/855/head -> origin/gh/jerryzh168/855/head 2025-03-17T17:41:36.0785522Z * [new branch] gh/jerryzh168/855/orig -> origin/gh/jerryzh168/855/orig 2025-03-17T17:41:36.0787004Z * [new branch] gh/jerryzh168/859/base -> origin/gh/jerryzh168/859/base 2025-03-17T17:41:36.0787964Z * [new branch] gh/jerryzh168/859/head -> origin/gh/jerryzh168/859/head 2025-03-17T17:41:36.0788998Z * [new branch] gh/jerryzh168/859/orig -> origin/gh/jerryzh168/859/orig 2025-03-17T17:41:36.0790246Z * [new branch] gh/jerryzh168/860/base -> origin/gh/jerryzh168/860/base 2025-03-17T17:41:36.0791251Z * [new branch] gh/jerryzh168/860/head -> origin/gh/jerryzh168/860/head 2025-03-17T17:41:36.0792260Z * [new branch] gh/jerryzh168/860/orig -> origin/gh/jerryzh168/860/orig 2025-03-17T17:41:36.0793909Z * [new branch] gh/jgong5/23/base -> origin/gh/jgong5/23/base 2025-03-17T17:41:36.0794829Z * [new branch] gh/jgong5/23/head -> origin/gh/jgong5/23/head 2025-03-17T17:41:36.0796506Z * [new branch] gh/jiayisunx/34/base -> origin/gh/jiayisunx/34/base 2025-03-17T17:41:36.0797471Z * [new branch] gh/jiayisunx/34/head -> origin/gh/jiayisunx/34/head 2025-03-17T17:41:36.0798431Z * [new branch] gh/jiayisunx/34/orig -> origin/gh/jiayisunx/34/orig 2025-03-17T17:41:36.0799734Z * [new branch] gh/jiayisunx/37/base -> origin/gh/jiayisunx/37/base 2025-03-17T17:41:36.0800678Z * [new branch] gh/jiayisunx/37/head -> origin/gh/jiayisunx/37/head 2025-03-17T17:41:36.0801730Z * [new branch] gh/jiayisunx/37/orig -> origin/gh/jiayisunx/37/orig 2025-03-17T17:41:36.0803046Z * [new branch] gh/jiayisunx/50/base -> origin/gh/jiayisunx/50/base 2025-03-17T17:41:36.0803923Z * [new branch] gh/jiayisunx/50/head -> origin/gh/jiayisunx/50/head 2025-03-17T17:41:36.0804865Z * [new branch] gh/jiayisunx/50/orig -> origin/gh/jiayisunx/50/orig 2025-03-17T17:41:36.0806243Z * [new branch] gh/jiayisunx/51/base -> origin/gh/jiayisunx/51/base 2025-03-17T17:41:36.0807192Z * [new branch] gh/jiayisunx/51/head -> origin/gh/jiayisunx/51/head 2025-03-17T17:41:36.0808105Z * [new branch] gh/jiayisunx/51/orig -> origin/gh/jiayisunx/51/orig 2025-03-17T17:41:36.0809498Z * [new branch] gh/jiayisunx/53/base -> 
origin/gh/jiayisunx/53/base 2025-03-17T17:41:36.0810399Z * [new branch] gh/jiayisunx/53/head -> origin/gh/jiayisunx/53/head 2025-03-17T17:41:36.0811373Z * [new branch] gh/jiayisunx/53/orig -> origin/gh/jiayisunx/53/orig 2025-03-17T17:41:36.0812736Z * [new branch] gh/jiayisunx/54/base -> origin/gh/jiayisunx/54/base 2025-03-17T17:41:36.0813631Z * [new branch] gh/jiayisunx/54/head -> origin/gh/jiayisunx/54/head 2025-03-17T17:41:36.0814606Z * [new branch] gh/jiayisunx/54/orig -> origin/gh/jiayisunx/54/orig 2025-03-17T17:41:36.0815980Z * [new branch] gh/jiayisunx/55/base -> origin/gh/jiayisunx/55/base 2025-03-17T17:41:36.0816821Z * [new branch] gh/jiayisunx/55/head -> origin/gh/jiayisunx/55/head 2025-03-17T17:41:36.0817769Z * [new branch] gh/jiayisunx/55/orig -> origin/gh/jiayisunx/55/orig 2025-03-17T17:41:36.0819164Z * [new branch] gh/jiayisunx/56/base -> origin/gh/jiayisunx/56/base 2025-03-17T17:41:36.0820089Z * [new branch] gh/jiayisunx/56/head -> origin/gh/jiayisunx/56/head 2025-03-17T17:41:36.0821053Z * [new branch] gh/jiayisunx/56/orig -> origin/gh/jiayisunx/56/orig 2025-03-17T17:41:36.0822392Z * [new branch] gh/jiayisunx/57/base -> origin/gh/jiayisunx/57/base 2025-03-17T17:41:36.0823310Z * [new branch] gh/jiayisunx/57/head -> origin/gh/jiayisunx/57/head 2025-03-17T17:41:36.0824313Z * [new branch] gh/jiayisunx/57/orig -> origin/gh/jiayisunx/57/orig 2025-03-17T17:41:36.0825681Z * [new branch] gh/jiayisunx/58/base -> origin/gh/jiayisunx/58/base 2025-03-17T17:41:36.0826676Z * [new branch] gh/jiayisunx/58/head -> origin/gh/jiayisunx/58/head 2025-03-17T17:41:36.0827668Z * [new branch] gh/jiayisunx/58/orig -> origin/gh/jiayisunx/58/orig 2025-03-17T17:41:36.0828981Z * [new branch] gh/jiayisunx/59/base -> origin/gh/jiayisunx/59/base 2025-03-17T17:41:36.0829910Z * [new branch] gh/jiayisunx/59/head -> origin/gh/jiayisunx/59/head 2025-03-17T17:41:36.0830888Z * [new branch] gh/jiayisunx/59/orig -> origin/gh/jiayisunx/59/orig 2025-03-17T17:41:36.0832294Z * [new branch] gh/jiayisunx/60/base -> origin/gh/jiayisunx/60/base 2025-03-17T17:41:36.0833213Z * [new branch] gh/jiayisunx/60/head -> origin/gh/jiayisunx/60/head 2025-03-17T17:41:36.0834134Z * [new branch] gh/jiayisunx/60/orig -> origin/gh/jiayisunx/60/orig 2025-03-17T17:41:36.0835497Z * [new branch] gh/jiayisunx/61/base -> origin/gh/jiayisunx/61/base 2025-03-17T17:41:36.0836445Z * [new branch] gh/jiayisunx/61/head -> origin/gh/jiayisunx/61/head 2025-03-17T17:41:36.0841892Z * [new branch] gh/jiayisunx/61/orig -> origin/gh/jiayisunx/61/orig 2025-03-17T17:41:36.0843459Z * [new branch] gh/jjwu@meta.com/1/base -> origin/gh/jjwu@meta.com/1/base 2025-03-17T17:41:36.0844475Z * [new branch] gh/jjwu@meta.com/1/head -> origin/gh/jjwu@meta.com/1/head 2025-03-17T17:41:36.0846684Z * [new branch] gh/jon-chuang/1/base -> origin/gh/jon-chuang/1/base 2025-03-17T17:41:36.0848260Z * [new branch] gh/jon-chuang/1/head -> origin/gh/jon-chuang/1/head 2025-03-17T17:41:36.0850366Z * [new branch] gh/jon-chuang/12/base -> origin/gh/jon-chuang/12/base 2025-03-17T17:41:36.0852351Z * [new branch] gh/jon-chuang/13/base -> origin/gh/jon-chuang/13/base 2025-03-17T17:41:36.0854373Z * [new branch] gh/jon-chuang/14/base -> origin/gh/jon-chuang/14/base 2025-03-17T17:41:36.0856427Z * [new branch] gh/jon-chuang/16/base -> origin/gh/jon-chuang/16/base 2025-03-17T17:41:36.0858031Z * [new branch] gh/jon-chuang/16/head -> origin/gh/jon-chuang/16/head 2025-03-17T17:41:36.0859561Z * [new branch] gh/jon-chuang/16/orig -> origin/gh/jon-chuang/16/orig 2025-03-17T17:41:36.0861686Z * [new branch] 
gh/jon-chuang/19/base -> origin/gh/jon-chuang/19/base 2025-03-17T17:41:36.0863241Z * [new branch] gh/jon-chuang/19/head -> origin/gh/jon-chuang/19/head 2025-03-17T17:41:36.0864815Z * [new branch] gh/jon-chuang/19/orig -> origin/gh/jon-chuang/19/orig 2025-03-17T17:41:36.0866714Z * [new branch] gh/jon-chuang/2/base -> origin/gh/jon-chuang/2/base 2025-03-17T17:41:36.0868180Z * [new branch] gh/jon-chuang/2/head -> origin/gh/jon-chuang/2/head 2025-03-17T17:41:36.0869150Z * [new branch] gh/jon-chuang/3/base -> origin/gh/jon-chuang/3/base 2025-03-17T17:41:36.0870072Z * [new branch] gh/jon-chuang/3/head -> origin/gh/jon-chuang/3/head 2025-03-17T17:41:36.0871721Z * [new branch] gh/jon-chuang/4/base -> origin/gh/jon-chuang/4/base 2025-03-17T17:41:36.0872641Z * [new branch] gh/jon-chuang/4/head -> origin/gh/jon-chuang/4/head 2025-03-17T17:41:36.0873798Z * [new branch] gh/jon-chuang/5/base -> origin/gh/jon-chuang/5/base 2025-03-17T17:41:36.0874775Z * [new branch] gh/jon-chuang/5/head -> origin/gh/jon-chuang/5/head 2025-03-17T17:41:36.0876405Z * [new branch] gh/jon-chuang/6/base -> origin/gh/jon-chuang/6/base 2025-03-17T17:41:36.0877371Z * [new branch] gh/jon-chuang/6/head -> origin/gh/jon-chuang/6/head 2025-03-17T17:41:36.0878518Z * [new branch] gh/jon-chuang/7/base -> origin/gh/jon-chuang/7/base 2025-03-17T17:41:36.0879429Z * [new branch] gh/jon-chuang/7/head -> origin/gh/jon-chuang/7/head 2025-03-17T17:41:36.0880589Z * [new branch] gh/jon-chuang/8/base -> origin/gh/jon-chuang/8/base 2025-03-17T17:41:36.0881565Z * [new branch] gh/jon-chuang/8/head -> origin/gh/jon-chuang/8/head 2025-03-17T17:41:36.0883105Z * [new branch] gh/justinchuby/102/base -> origin/gh/justinchuby/102/base 2025-03-17T17:41:36.0884079Z * [new branch] gh/justinchuby/102/head -> origin/gh/justinchuby/102/head 2025-03-17T17:41:36.0885066Z * [new branch] gh/justinchuby/102/orig -> origin/gh/justinchuby/102/orig 2025-03-17T17:41:36.0886386Z * [new branch] gh/justinchuby/103/base -> origin/gh/justinchuby/103/base 2025-03-17T17:41:36.0887390Z * [new branch] gh/justinchuby/103/head -> origin/gh/justinchuby/103/head 2025-03-17T17:41:36.0888352Z * [new branch] gh/justinchuby/103/orig -> origin/gh/justinchuby/103/orig 2025-03-17T17:41:36.0889656Z * [new branch] gh/justinchuby/104/base -> origin/gh/justinchuby/104/base 2025-03-17T17:41:36.0890627Z * [new branch] gh/justinchuby/104/head -> origin/gh/justinchuby/104/head 2025-03-17T17:41:36.0891610Z * [new branch] gh/justinchuby/104/orig -> origin/gh/justinchuby/104/orig 2025-03-17T17:41:36.0892887Z * [new branch] gh/justinchuby/105/base -> origin/gh/justinchuby/105/base 2025-03-17T17:41:36.0893689Z * [new branch] gh/justinchuby/105/head -> origin/gh/justinchuby/105/head 2025-03-17T17:41:36.0894725Z * [new branch] gh/justinchuby/105/orig -> origin/gh/justinchuby/105/orig 2025-03-17T17:41:36.0895930Z * [new branch] gh/justinchuby/106/base -> origin/gh/justinchuby/106/base 2025-03-17T17:41:36.0896900Z * [new branch] gh/justinchuby/106/head -> origin/gh/justinchuby/106/head 2025-03-17T17:41:36.0897876Z * [new branch] gh/justinchuby/106/orig -> origin/gh/justinchuby/106/orig 2025-03-17T17:41:36.0899202Z * [new branch] gh/justinchuby/107/base -> origin/gh/justinchuby/107/base 2025-03-17T17:41:36.0900184Z * [new branch] gh/justinchuby/107/head -> origin/gh/justinchuby/107/head 2025-03-17T17:41:36.0901160Z * [new branch] gh/justinchuby/107/orig -> origin/gh/justinchuby/107/orig 2025-03-17T17:41:36.0902446Z * [new branch] gh/justinchuby/108/base -> origin/gh/justinchuby/108/base 
2025-03-17T17:41:36.0903423Z * [new branch] gh/justinchuby/108/head -> origin/gh/justinchuby/108/head 2025-03-17T17:41:36.0904458Z * [new branch] gh/justinchuby/108/orig -> origin/gh/justinchuby/108/orig 2025-03-17T17:41:36.0905747Z * [new branch] gh/justinchuby/109/base -> origin/gh/justinchuby/109/base 2025-03-17T17:41:36.0906791Z * [new branch] gh/justinchuby/109/head -> origin/gh/justinchuby/109/head 2025-03-17T17:41:36.0907883Z * [new branch] gh/justinchuby/109/orig -> origin/gh/justinchuby/109/orig 2025-03-17T17:41:36.0909216Z * [new branch] gh/justinchuby/110/base -> origin/gh/justinchuby/110/base 2025-03-17T17:41:36.0910176Z * [new branch] gh/justinchuby/110/head -> origin/gh/justinchuby/110/head 2025-03-17T17:41:36.0911598Z * [new branch] gh/justinchuby/110/orig -> origin/gh/justinchuby/110/orig 2025-03-17T17:41:36.0912889Z * [new branch] gh/justinchuby/111/base -> origin/gh/justinchuby/111/base 2025-03-17T17:41:36.0913851Z * [new branch] gh/justinchuby/111/head -> origin/gh/justinchuby/111/head 2025-03-17T17:41:36.0914764Z * [new branch] gh/justinchuby/111/orig -> origin/gh/justinchuby/111/orig 2025-03-17T17:41:36.0916005Z * [new branch] gh/justinchuby/112/base -> origin/gh/justinchuby/112/base 2025-03-17T17:41:36.0916975Z * [new branch] gh/justinchuby/112/head -> origin/gh/justinchuby/112/head 2025-03-17T17:41:36.0917933Z * [new branch] gh/justinchuby/112/orig -> origin/gh/justinchuby/112/orig 2025-03-17T17:41:36.0919283Z * [new branch] gh/justinchuby/113/base -> origin/gh/justinchuby/113/base 2025-03-17T17:41:36.0920272Z * [new branch] gh/justinchuby/113/head -> origin/gh/justinchuby/113/head 2025-03-17T17:41:36.0921233Z * [new branch] gh/justinchuby/113/orig -> origin/gh/justinchuby/113/orig 2025-03-17T17:41:36.0922518Z * [new branch] gh/justinchuby/114/base -> origin/gh/justinchuby/114/base 2025-03-17T17:41:36.0923504Z * [new branch] gh/justinchuby/114/head -> origin/gh/justinchuby/114/head 2025-03-17T17:41:36.0924526Z * [new branch] gh/justinchuby/114/orig -> origin/gh/justinchuby/114/orig 2025-03-17T17:41:36.0925796Z * [new branch] gh/justinchuby/115/base -> origin/gh/justinchuby/115/base 2025-03-17T17:41:36.0927172Z * [new branch] gh/justinchuby/115/head -> origin/gh/justinchuby/115/head 2025-03-17T17:41:36.0928135Z * [new branch] gh/justinchuby/115/orig -> origin/gh/justinchuby/115/orig 2025-03-17T17:41:36.0930024Z * [new branch] gh/kadeng/1/base -> origin/gh/kadeng/1/base 2025-03-17T17:41:36.0931035Z * [new branch] gh/kadeng/1/head -> origin/gh/kadeng/1/head 2025-03-17T17:41:36.0932407Z * [new branch] gh/kadeng/1/orig -> origin/gh/kadeng/1/orig 2025-03-17T17:41:36.0934142Z * [new branch] gh/kadeng/12/base -> origin/gh/kadeng/12/base 2025-03-17T17:41:36.0935192Z * [new branch] gh/kadeng/12/head -> origin/gh/kadeng/12/head 2025-03-17T17:41:36.0936580Z * [new branch] gh/kadeng/13/base -> origin/gh/kadeng/13/base 2025-03-17T17:41:36.0937873Z * [new branch] gh/kadeng/13/head -> origin/gh/kadeng/13/head 2025-03-17T17:41:36.0938998Z * [new branch] gh/kadeng/14/base -> origin/gh/kadeng/14/base 2025-03-17T17:41:36.0939902Z * [new branch] gh/kadeng/14/head -> origin/gh/kadeng/14/head 2025-03-17T17:41:36.0941282Z * [new branch] gh/kadeng/16/base -> origin/gh/kadeng/16/base 2025-03-17T17:41:36.0942244Z * [new branch] gh/kadeng/16/head -> origin/gh/kadeng/16/head 2025-03-17T17:41:36.0944036Z * [new branch] gh/kadeng/6/base -> origin/gh/kadeng/6/base 2025-03-17T17:41:36.0945054Z * [new branch] gh/kadeng/6/head -> origin/gh/kadeng/6/head 2025-03-17T17:41:36.0946230Z * [new branch] 
gh/kadeng/7/base -> origin/gh/kadeng/7/base 2025-03-17T17:41:36.0947491Z * [new branch] gh/kadeng/9/base -> origin/gh/kadeng/9/base 2025-03-17T17:41:36.0948488Z * [new branch] gh/kadeng/9/head -> origin/gh/kadeng/9/head 2025-03-17T17:41:36.0950102Z * [new branch] gh/kimishpatel/186/base -> origin/gh/kimishpatel/186/base 2025-03-17T17:41:36.0951119Z * [new branch] gh/kimishpatel/186/head -> origin/gh/kimishpatel/186/head 2025-03-17T17:41:36.0952123Z * [new branch] gh/kimishpatel/186/orig -> origin/gh/kimishpatel/186/orig 2025-03-17T17:41:36.0953678Z * [new branch] gh/kurtamohler/31/base -> origin/gh/kurtamohler/31/base 2025-03-17T17:41:36.0954637Z * [new branch] gh/kurtamohler/31/head -> origin/gh/kurtamohler/31/head 2025-03-17T17:41:36.0955598Z * [new branch] gh/kurtamohler/31/orig -> origin/gh/kurtamohler/31/orig 2025-03-17T17:41:36.0956873Z * [new branch] gh/kurtamohler/32/base -> origin/gh/kurtamohler/32/base 2025-03-17T17:41:36.0957847Z * [new branch] gh/kurtamohler/32/head -> origin/gh/kurtamohler/32/head 2025-03-17T17:41:36.0958830Z * [new branch] gh/kurtamohler/32/orig -> origin/gh/kurtamohler/32/orig 2025-03-17T17:41:36.0960469Z * [new branch] gh/kwen2501/1/base -> origin/gh/kwen2501/1/base 2025-03-17T17:41:36.0961514Z * [new branch] gh/kwen2501/1/head -> origin/gh/kwen2501/1/head 2025-03-17T17:41:36.0963085Z * [new branch] gh/kwen2501/108/base -> origin/gh/kwen2501/108/base 2025-03-17T17:41:36.0964513Z * [new branch] gh/kwen2501/108/head -> origin/gh/kwen2501/108/head 2025-03-17T17:41:36.0965554Z * [new branch] gh/kwen2501/108/orig -> origin/gh/kwen2501/108/orig 2025-03-17T17:41:36.0966863Z * [new branch] gh/kwen2501/109/base -> origin/gh/kwen2501/109/base 2025-03-17T17:41:36.0968581Z * [new branch] gh/kwen2501/109/head -> origin/gh/kwen2501/109/head 2025-03-17T17:41:36.0968959Z * [new branch] gh/kwen2501/109/orig -> origin/gh/kwen2501/109/orig 2025-03-17T17:41:36.0970248Z * [new branch] gh/kwen2501/118/base -> origin/gh/kwen2501/118/base 2025-03-17T17:41:36.0971266Z * [new branch] gh/kwen2501/118/head -> origin/gh/kwen2501/118/head 2025-03-17T17:41:36.0972235Z * [new branch] gh/kwen2501/118/orig -> origin/gh/kwen2501/118/orig 2025-03-17T17:41:36.0973961Z * [new branch] gh/kwen2501/123/base -> origin/gh/kwen2501/123/base 2025-03-17T17:41:36.0974618Z * [new branch] gh/kwen2501/123/head -> origin/gh/kwen2501/123/head 2025-03-17T17:41:36.0975633Z * [new branch] gh/kwen2501/123/orig -> origin/gh/kwen2501/123/orig 2025-03-17T17:41:36.0976989Z * [new branch] gh/kwen2501/125/base -> origin/gh/kwen2501/125/base 2025-03-17T17:41:36.0977934Z * [new branch] gh/kwen2501/125/head -> origin/gh/kwen2501/125/head 2025-03-17T17:41:36.0978918Z * [new branch] gh/kwen2501/125/orig -> origin/gh/kwen2501/125/orig 2025-03-17T17:41:36.0980194Z * [new branch] gh/kwen2501/126/base -> origin/gh/kwen2501/126/base 2025-03-17T17:41:36.0981160Z * [new branch] gh/kwen2501/126/head -> origin/gh/kwen2501/126/head 2025-03-17T17:41:36.0982125Z * [new branch] gh/kwen2501/126/orig -> origin/gh/kwen2501/126/orig 2025-03-17T17:41:36.0983393Z * [new branch] gh/kwen2501/127/base -> origin/gh/kwen2501/127/base 2025-03-17T17:41:36.0984418Z * [new branch] gh/kwen2501/127/head -> origin/gh/kwen2501/127/head 2025-03-17T17:41:36.0985529Z * [new branch] gh/kwen2501/127/orig -> origin/gh/kwen2501/127/orig 2025-03-17T17:41:36.0987228Z * [new branch] gh/kwen2501/128/base -> origin/gh/kwen2501/128/base 2025-03-17T17:41:36.0988172Z * [new branch] gh/kwen2501/128/head -> origin/gh/kwen2501/128/head 2025-03-17T17:41:36.0989319Z * [new 
branch] gh/kwen2501/128/orig -> origin/gh/kwen2501/128/orig 2025-03-17T17:41:36.0990690Z * [new branch] gh/kwen2501/129/base -> origin/gh/kwen2501/129/base 2025-03-17T17:41:36.0991619Z * [new branch] gh/kwen2501/129/head -> origin/gh/kwen2501/129/head 2025-03-17T17:41:36.0992577Z * [new branch] gh/kwen2501/129/orig -> origin/gh/kwen2501/129/orig 2025-03-17T17:41:36.0993994Z * [new branch] gh/kwen2501/130/base -> origin/gh/kwen2501/130/base 2025-03-17T17:41:36.0995087Z * [new branch] gh/kwen2501/130/head -> origin/gh/kwen2501/130/head 2025-03-17T17:41:36.0996092Z * [new branch] gh/kwen2501/130/orig -> origin/gh/kwen2501/130/orig 2025-03-17T17:41:36.0997287Z * [new branch] gh/kwen2501/131/base -> origin/gh/kwen2501/131/base 2025-03-17T17:41:36.0998255Z * [new branch] gh/kwen2501/131/head -> origin/gh/kwen2501/131/head 2025-03-17T17:41:36.0999244Z * [new branch] gh/kwen2501/131/orig -> origin/gh/kwen2501/131/orig 2025-03-17T17:41:36.1000660Z * [new branch] gh/kwen2501/132/base -> origin/gh/kwen2501/132/base 2025-03-17T17:41:36.1002022Z * [new branch] gh/kwen2501/132/head -> origin/gh/kwen2501/132/head 2025-03-17T17:41:36.1003014Z * [new branch] gh/kwen2501/132/orig -> origin/gh/kwen2501/132/orig 2025-03-17T17:41:36.1004166Z * [new branch] gh/kwen2501/133/base -> origin/gh/kwen2501/133/base 2025-03-17T17:41:36.1005216Z * [new branch] gh/kwen2501/133/head -> origin/gh/kwen2501/133/head 2025-03-17T17:41:36.1006194Z * [new branch] gh/kwen2501/133/orig -> origin/gh/kwen2501/133/orig 2025-03-17T17:41:36.1007365Z * [new branch] gh/kwen2501/134/base -> origin/gh/kwen2501/134/base 2025-03-17T17:41:36.1008441Z * [new branch] gh/kwen2501/134/head -> origin/gh/kwen2501/134/head 2025-03-17T17:41:36.1009448Z * [new branch] gh/kwen2501/134/orig -> origin/gh/kwen2501/134/orig 2025-03-17T17:41:36.1010772Z * [new branch] gh/kwen2501/15/base -> origin/gh/kwen2501/15/base 2025-03-17T17:41:36.1011722Z * [new branch] gh/kwen2501/15/head -> origin/gh/kwen2501/15/head 2025-03-17T17:41:36.1013335Z * [new branch] gh/kwen2501/87/base -> origin/gh/kwen2501/87/base 2025-03-17T17:41:36.1014258Z * [new branch] gh/kwen2501/87/head -> origin/gh/kwen2501/87/head 2025-03-17T17:41:36.1015309Z * [new branch] gh/kwen2501/87/orig -> origin/gh/kwen2501/87/orig 2025-03-17T17:41:36.1017084Z * [new branch] gh/kwen2501/97/base -> origin/gh/kwen2501/97/base 2025-03-17T17:41:36.1018119Z * [new branch] gh/kwen2501/97/head -> origin/gh/kwen2501/97/head 2025-03-17T17:41:36.1019097Z * [new branch] gh/kwen2501/97/orig -> origin/gh/kwen2501/97/orig 2025-03-17T17:41:36.1020703Z * [new branch] gh/laithsakka/107/base -> origin/gh/laithsakka/107/base 2025-03-17T17:41:36.1021714Z * [new branch] gh/laithsakka/107/head -> origin/gh/laithsakka/107/head 2025-03-17T17:41:36.1022906Z * [new branch] gh/laithsakka/107/orig -> origin/gh/laithsakka/107/orig 2025-03-17T17:41:36.1024430Z * [new branch] gh/laithsakka/108/base -> origin/gh/laithsakka/108/base 2025-03-17T17:41:36.1025415Z * [new branch] gh/laithsakka/108/head -> origin/gh/laithsakka/108/head 2025-03-17T17:41:36.1026455Z * [new branch] gh/laithsakka/108/orig -> origin/gh/laithsakka/108/orig 2025-03-17T17:41:36.1028299Z * [new branch] gh/laithsakka/109/base -> origin/gh/laithsakka/109/base 2025-03-17T17:41:36.1029359Z * [new branch] gh/laithsakka/109/head -> origin/gh/laithsakka/109/head 2025-03-17T17:41:36.1030297Z * [new branch] gh/laithsakka/109/orig -> origin/gh/laithsakka/109/orig 2025-03-17T17:41:36.1031570Z * [new branch] gh/laithsakka/110/base -> origin/gh/laithsakka/110/base 
2025-03-17T17:41:36.1032584Z * [new branch] gh/laithsakka/110/head -> origin/gh/laithsakka/110/head 2025-03-17T17:41:36.1033533Z * [new branch] gh/laithsakka/110/orig -> origin/gh/laithsakka/110/orig 2025-03-17T17:41:36.1034983Z * [new branch] gh/laithsakka/111/base -> origin/gh/laithsakka/111/base 2025-03-17T17:41:36.1035995Z * [new branch] gh/laithsakka/111/head -> origin/gh/laithsakka/111/head 2025-03-17T17:41:36.1037153Z * [new branch] gh/laithsakka/111/orig -> origin/gh/laithsakka/111/orig 2025-03-17T17:41:36.1039684Z * [new branch] gh/laithsakka/112/base -> origin/gh/laithsakka/112/base 2025-03-17T17:41:36.1040879Z * [new branch] gh/laithsakka/112/head -> origin/gh/laithsakka/112/head 2025-03-17T17:41:36.1041920Z * [new branch] gh/laithsakka/112/orig -> origin/gh/laithsakka/112/orig 2025-03-17T17:41:36.1043644Z * [new branch] gh/laithsakka/113/base -> origin/gh/laithsakka/113/base 2025-03-17T17:41:36.1044632Z * [new branch] gh/laithsakka/113/head -> origin/gh/laithsakka/113/head 2025-03-17T17:41:36.1045597Z * [new branch] gh/laithsakka/113/orig -> origin/gh/laithsakka/113/orig 2025-03-17T17:41:36.1047052Z * [new branch] gh/laithsakka/114/base -> origin/gh/laithsakka/114/base 2025-03-17T17:41:36.1048082Z * [new branch] gh/laithsakka/114/head -> origin/gh/laithsakka/114/head 2025-03-17T17:41:36.1049502Z * [new branch] gh/laithsakka/114/orig -> origin/gh/laithsakka/114/orig 2025-03-17T17:41:36.1050997Z * [new branch] gh/laithsakka/115/base -> origin/gh/laithsakka/115/base 2025-03-17T17:41:36.1052000Z * [new branch] gh/laithsakka/115/head -> origin/gh/laithsakka/115/head 2025-03-17T17:41:36.1052967Z * [new branch] gh/laithsakka/115/orig -> origin/gh/laithsakka/115/orig 2025-03-17T17:41:36.1054259Z * [new branch] gh/laithsakka/116/base -> origin/gh/laithsakka/116/base 2025-03-17T17:41:36.1055176Z * [new branch] gh/laithsakka/116/head -> origin/gh/laithsakka/116/head 2025-03-17T17:41:36.1056280Z * [new branch] gh/laithsakka/116/orig -> origin/gh/laithsakka/116/orig 2025-03-17T17:41:36.1057481Z * [new branch] gh/laithsakka/117/base -> origin/gh/laithsakka/117/base 2025-03-17T17:41:36.1058505Z * [new branch] gh/laithsakka/117/head -> origin/gh/laithsakka/117/head 2025-03-17T17:41:36.1059492Z * [new branch] gh/laithsakka/117/orig -> origin/gh/laithsakka/117/orig 2025-03-17T17:41:36.1060861Z * [new branch] gh/laithsakka/118/base -> origin/gh/laithsakka/118/base 2025-03-17T17:41:36.1061858Z * [new branch] gh/laithsakka/118/head -> origin/gh/laithsakka/118/head 2025-03-17T17:41:36.1062797Z * [new branch] gh/laithsakka/118/orig -> origin/gh/laithsakka/118/orig 2025-03-17T17:41:36.1064112Z * [new branch] gh/laithsakka/119/base -> origin/gh/laithsakka/119/base 2025-03-17T17:41:36.1065126Z * [new branch] gh/laithsakka/119/head -> origin/gh/laithsakka/119/head 2025-03-17T17:41:36.1066078Z * [new branch] gh/laithsakka/119/orig -> origin/gh/laithsakka/119/orig 2025-03-17T17:41:36.1067417Z * [new branch] gh/laithsakka/120/base -> origin/gh/laithsakka/120/base 2025-03-17T17:41:36.1068412Z * [new branch] gh/laithsakka/120/head -> origin/gh/laithsakka/120/head 2025-03-17T17:41:36.1069447Z * [new branch] gh/laithsakka/120/orig -> origin/gh/laithsakka/120/orig 2025-03-17T17:41:36.1070772Z * [new branch] gh/laithsakka/121/base -> origin/gh/laithsakka/121/base 2025-03-17T17:41:36.1071742Z * [new branch] gh/laithsakka/121/head -> origin/gh/laithsakka/121/head 2025-03-17T17:41:36.1072702Z * [new branch] gh/laithsakka/121/orig -> origin/gh/laithsakka/121/orig 2025-03-17T17:41:36.1074043Z * [new branch] 
gh/laithsakka/122/base -> origin/gh/laithsakka/122/base 2025-03-17T17:41:36.1075064Z * [new branch] gh/laithsakka/122/head -> origin/gh/laithsakka/122/head 2025-03-17T17:41:36.1076030Z * [new branch] gh/laithsakka/122/orig -> origin/gh/laithsakka/122/orig 2025-03-17T17:41:36.1077456Z * [new branch] gh/laithsakka/28/base -> origin/gh/laithsakka/28/base 2025-03-17T17:41:36.1078984Z * [new branch] gh/laithsakka/29/base -> origin/gh/laithsakka/29/base 2025-03-17T17:41:36.1080202Z * [new branch] gh/laithsakka/30/base -> origin/gh/laithsakka/30/base 2025-03-17T17:41:36.1081196Z * [new branch] gh/laithsakka/30/head -> origin/gh/laithsakka/30/head 2025-03-17T17:41:36.1082404Z * [new branch] gh/laithsakka/31/base -> origin/gh/laithsakka/31/base 2025-03-17T17:41:36.1083292Z * [new branch] gh/laithsakka/31/head -> origin/gh/laithsakka/31/head 2025-03-17T17:41:36.1084942Z * [new branch] gh/laithsakka/32/base -> origin/gh/laithsakka/32/base 2025-03-17T17:41:36.1085854Z * [new branch] gh/laithsakka/32/head -> origin/gh/laithsakka/32/head 2025-03-17T17:41:36.1087432Z * [new branch] gh/larryliu0820/46/base -> origin/gh/larryliu0820/46/base 2025-03-17T17:41:36.1088609Z * [new branch] gh/larryliu0820/46/head -> origin/gh/larryliu0820/46/head 2025-03-17T17:41:36.1089816Z * [new branch] gh/larryliu0820/46/orig -> origin/gh/larryliu0820/46/orig 2025-03-17T17:41:36.1091415Z * [new branch] gh/leslie-fang-intel/180/base -> origin/gh/leslie-fang-intel/180/base 2025-03-17T17:41:36.1092389Z * [new branch] gh/leslie-fang-intel/180/head -> origin/gh/leslie-fang-intel/180/head 2025-03-17T17:41:36.1093395Z * [new branch] gh/leslie-fang-intel/180/orig -> origin/gh/leslie-fang-intel/180/orig 2025-03-17T17:41:36.1094691Z * [new branch] gh/leslie-fang-intel/181/base -> origin/gh/leslie-fang-intel/181/base 2025-03-17T17:41:36.1095745Z * [new branch] gh/leslie-fang-intel/181/head -> origin/gh/leslie-fang-intel/181/head 2025-03-17T17:41:36.1096578Z * [new branch] gh/leslie-fang-intel/181/orig -> origin/gh/leslie-fang-intel/181/orig 2025-03-17T17:41:36.1097922Z * [new branch] gh/leslie-fang-intel/182/base -> origin/gh/leslie-fang-intel/182/base 2025-03-17T17:41:36.1098864Z * [new branch] gh/leslie-fang-intel/182/head -> origin/gh/leslie-fang-intel/182/head 2025-03-17T17:41:36.1099818Z * [new branch] gh/leslie-fang-intel/182/orig -> origin/gh/leslie-fang-intel/182/orig 2025-03-17T17:41:36.1101203Z * [new branch] gh/leslie-fang-intel/183/base -> origin/gh/leslie-fang-intel/183/base 2025-03-17T17:41:36.1102155Z * [new branch] gh/leslie-fang-intel/183/head -> origin/gh/leslie-fang-intel/183/head 2025-03-17T17:41:36.1103214Z * [new branch] gh/leslie-fang-intel/183/orig -> origin/gh/leslie-fang-intel/183/orig 2025-03-17T17:41:36.1104576Z * [new branch] gh/leslie-fang-intel/184/base -> origin/gh/leslie-fang-intel/184/base 2025-03-17T17:41:36.1106015Z * [new branch] gh/leslie-fang-intel/184/head -> origin/gh/leslie-fang-intel/184/head 2025-03-17T17:41:36.1107142Z * [new branch] gh/leslie-fang-intel/184/orig -> origin/gh/leslie-fang-intel/184/orig 2025-03-17T17:41:36.1108454Z * [new branch] gh/leslie-fang-intel/185/base -> origin/gh/leslie-fang-intel/185/base 2025-03-17T17:41:36.1109391Z * [new branch] gh/leslie-fang-intel/185/head -> origin/gh/leslie-fang-intel/185/head 2025-03-17T17:41:36.1110334Z * [new branch] gh/leslie-fang-intel/185/orig -> origin/gh/leslie-fang-intel/185/orig 2025-03-17T17:41:36.1111645Z * [new branch] gh/leslie-fang-intel/186/base -> origin/gh/leslie-fang-intel/186/base 2025-03-17T17:41:36.1112598Z * [new 
branch] gh/leslie-fang-intel/186/head -> origin/gh/leslie-fang-intel/186/head 2025-03-17T17:41:36.1113511Z * [new branch] gh/leslie-fang-intel/186/orig -> origin/gh/leslie-fang-intel/186/orig 2025-03-17T17:41:36.1114855Z * [new branch] gh/leslie-fang-intel/187/base -> origin/gh/leslie-fang-intel/187/base 2025-03-17T17:41:36.1115868Z * [new branch] gh/leslie-fang-intel/187/head -> origin/gh/leslie-fang-intel/187/head 2025-03-17T17:41:36.1116806Z * [new branch] gh/leslie-fang-intel/187/orig -> origin/gh/leslie-fang-intel/187/orig 2025-03-17T17:41:36.1118116Z * [new branch] gh/leslie-fang-intel/188/base -> origin/gh/leslie-fang-intel/188/base 2025-03-17T17:41:36.1118962Z * [new branch] gh/leslie-fang-intel/188/head -> origin/gh/leslie-fang-intel/188/head 2025-03-17T17:41:36.1119974Z * [new branch] gh/leslie-fang-intel/188/orig -> origin/gh/leslie-fang-intel/188/orig 2025-03-17T17:41:36.1121552Z * [new branch] gh/lw/5/head -> origin/gh/lw/5/head 2025-03-17T17:41:36.1123311Z * [new branch] gh/lw/6/base -> origin/gh/lw/6/base 2025-03-17T17:41:36.1124261Z * [new branch] gh/lw/6/head -> origin/gh/lw/6/head 2025-03-17T17:41:36.1125426Z * [new branch] gh/lw/6/orig -> origin/gh/lw/6/orig 2025-03-17T17:41:36.1126719Z * [new branch] gh/lw/7/base -> origin/gh/lw/7/base 2025-03-17T17:41:36.1127681Z * [new branch] gh/lw/7/head -> origin/gh/lw/7/head 2025-03-17T17:41:36.1128702Z * [new branch] gh/lw/7/orig -> origin/gh/lw/7/orig 2025-03-17T17:41:36.1129982Z * [new branch] gh/lw/8/base -> origin/gh/lw/8/base 2025-03-17T17:41:36.1130986Z * [new branch] gh/lw/8/head -> origin/gh/lw/8/head 2025-03-17T17:41:36.1131993Z * [new branch] gh/lw/8/orig -> origin/gh/lw/8/orig 2025-03-17T17:41:36.1133624Z * [new branch] gh/malfet/14/base -> origin/gh/malfet/14/base 2025-03-17T17:41:36.1134837Z * [new branch] gh/malfet/155/base -> origin/gh/malfet/155/base 2025-03-17T17:41:36.1135751Z * [new branch] gh/malfet/155/head -> origin/gh/malfet/155/head 2025-03-17T17:41:36.1136985Z * [new branch] gh/malfet/155/orig -> origin/gh/malfet/155/orig 2025-03-17T17:41:36.1138383Z * [new branch] gh/malfet/159/base -> origin/gh/malfet/159/base 2025-03-17T17:41:36.1139372Z * [new branch] gh/malfet/159/head -> origin/gh/malfet/159/head 2025-03-17T17:41:36.1140323Z * [new branch] gh/malfet/159/orig -> origin/gh/malfet/159/orig 2025-03-17T17:41:36.1141639Z * [new branch] gh/malfet/169/base -> origin/gh/malfet/169/base 2025-03-17T17:41:36.1142557Z * [new branch] gh/malfet/169/head -> origin/gh/malfet/169/head 2025-03-17T17:41:36.1143679Z * [new branch] gh/malfet/169/orig -> origin/gh/malfet/169/orig 2025-03-17T17:41:36.1145020Z * [new branch] gh/malfet/178/base -> origin/gh/malfet/178/base 2025-03-17T17:41:36.1145973Z * [new branch] gh/malfet/178/head -> origin/gh/malfet/178/head 2025-03-17T17:41:36.1147068Z * [new branch] gh/malfet/178/orig -> origin/gh/malfet/178/orig 2025-03-17T17:41:36.1148307Z * [new branch] gh/malfet/179/base -> origin/gh/malfet/179/base 2025-03-17T17:41:36.1149263Z * [new branch] gh/malfet/179/head -> origin/gh/malfet/179/head 2025-03-17T17:41:36.1150217Z * [new branch] gh/malfet/179/orig -> origin/gh/malfet/179/orig 2025-03-17T17:41:36.1151525Z * [new branch] gh/malfet/180/base -> origin/gh/malfet/180/base 2025-03-17T17:41:36.1152465Z * [new branch] gh/malfet/180/head -> origin/gh/malfet/180/head 2025-03-17T17:41:36.1153518Z * [new branch] gh/malfet/180/orig -> origin/gh/malfet/180/orig 2025-03-17T17:41:36.1154782Z * [new branch] gh/malfet/181/base -> origin/gh/malfet/181/base 2025-03-17T17:41:36.1155690Z * [new 
branch] gh/malfet/181/head -> origin/gh/malfet/181/head 2025-03-17T17:41:36.1157098Z * [new branch] gh/malfet/181/orig -> origin/gh/malfet/181/orig 2025-03-17T17:41:36.1158391Z * [new branch] gh/malfet/182/base -> origin/gh/malfet/182/base 2025-03-17T17:41:36.1159394Z * [new branch] gh/malfet/182/head -> origin/gh/malfet/182/head 2025-03-17T17:41:36.1160510Z * [new branch] gh/malfet/182/orig -> origin/gh/malfet/182/orig 2025-03-17T17:41:36.1161823Z * [new branch] gh/malfet/183/base -> origin/gh/malfet/183/base 2025-03-17T17:41:36.1162805Z * [new branch] gh/malfet/183/head -> origin/gh/malfet/183/head 2025-03-17T17:41:36.1163831Z * [new branch] gh/malfet/183/orig -> origin/gh/malfet/183/orig 2025-03-17T17:41:36.1165087Z * [new branch] gh/malfet/184/base -> origin/gh/malfet/184/base 2025-03-17T17:41:36.1166008Z * [new branch] gh/malfet/184/head -> origin/gh/malfet/184/head 2025-03-17T17:41:36.1167025Z * [new branch] gh/malfet/184/orig -> origin/gh/malfet/184/orig 2025-03-17T17:41:36.1168224Z * [new branch] gh/malfet/185/base -> origin/gh/malfet/185/base 2025-03-17T17:41:36.1169251Z * [new branch] gh/malfet/185/head -> origin/gh/malfet/185/head 2025-03-17T17:41:36.1170216Z * [new branch] gh/malfet/185/orig -> origin/gh/malfet/185/orig 2025-03-17T17:41:36.1171492Z * [new branch] gh/malfet/186/base -> origin/gh/malfet/186/base 2025-03-17T17:41:36.1172423Z * [new branch] gh/malfet/186/head -> origin/gh/malfet/186/head 2025-03-17T17:41:36.1173554Z * [new branch] gh/malfet/186/orig -> origin/gh/malfet/186/orig 2025-03-17T17:41:36.1174725Z * [new branch] gh/malfet/187/base -> origin/gh/malfet/187/base 2025-03-17T17:41:36.1175699Z * [new branch] gh/malfet/187/head -> origin/gh/malfet/187/head 2025-03-17T17:41:36.1176730Z * [new branch] gh/malfet/187/orig -> origin/gh/malfet/187/orig 2025-03-17T17:41:36.1178035Z * [new branch] gh/malfet/188/base -> origin/gh/malfet/188/base 2025-03-17T17:41:36.1178997Z * [new branch] gh/malfet/188/head -> origin/gh/malfet/188/head 2025-03-17T17:41:36.1179952Z * [new branch] gh/malfet/188/orig -> origin/gh/malfet/188/orig 2025-03-17T17:41:36.1181232Z * [new branch] gh/malfet/189/base -> origin/gh/malfet/189/base 2025-03-17T17:41:36.1182155Z * [new branch] gh/malfet/189/head -> origin/gh/malfet/189/head 2025-03-17T17:41:36.1183573Z * [new branch] gh/malfet/190/base -> origin/gh/malfet/190/base 2025-03-17T17:41:36.1184703Z * [new branch] gh/malfet/190/head -> origin/gh/malfet/190/head 2025-03-17T17:41:36.1185810Z * [new branch] gh/malfet/190/orig -> origin/gh/malfet/190/orig 2025-03-17T17:41:36.1187778Z * [new branch] gh/malfet/191/base -> origin/gh/malfet/191/base 2025-03-17T17:41:36.1188909Z * [new branch] gh/malfet/191/head -> origin/gh/malfet/191/head 2025-03-17T17:41:36.1190129Z * [new branch] gh/malfet/191/orig -> origin/gh/malfet/191/orig 2025-03-17T17:41:36.1191882Z * [new branch] gh/malfet/192/base -> origin/gh/malfet/192/base 2025-03-17T17:41:36.1192989Z * [new branch] gh/malfet/192/head -> origin/gh/malfet/192/head 2025-03-17T17:41:36.1194145Z * [new branch] gh/malfet/192/orig -> origin/gh/malfet/192/orig 2025-03-17T17:41:36.1195532Z * [new branch] gh/malfet/193/base -> origin/gh/malfet/193/base 2025-03-17T17:41:36.1196569Z * [new branch] gh/malfet/193/head -> origin/gh/malfet/193/head 2025-03-17T17:41:36.1197715Z * [new branch] gh/malfet/193/orig -> origin/gh/malfet/193/orig 2025-03-17T17:41:36.1199308Z * [new branch] gh/malfet/194/base -> origin/gh/malfet/194/base 2025-03-17T17:41:36.1200446Z * [new branch] gh/malfet/194/head -> origin/gh/malfet/194/head 
2025-03-17T17:41:36.1201531Z * [new branch] gh/malfet/194/orig -> origin/gh/malfet/194/orig 2025-03-17T17:41:36.1202956Z * [new branch] gh/malfet/195/base -> origin/gh/malfet/195/base 2025-03-17T17:41:36.1204097Z * [new branch] gh/malfet/195/head -> origin/gh/malfet/195/head 2025-03-17T17:41:36.1205193Z * [new branch] gh/malfet/195/orig -> origin/gh/malfet/195/orig 2025-03-17T17:41:36.1206654Z * [new branch] gh/malfet/196/base -> origin/gh/malfet/196/base 2025-03-17T17:41:36.1207672Z * [new branch] gh/malfet/196/head -> origin/gh/malfet/196/head 2025-03-17T17:41:36.1208857Z * [new branch] gh/malfet/196/orig -> origin/gh/malfet/196/orig 2025-03-17T17:41:36.1210360Z * [new branch] gh/malfet/197/base -> origin/gh/malfet/197/base 2025-03-17T17:41:36.1211447Z * [new branch] gh/malfet/197/head -> origin/gh/malfet/197/head 2025-03-17T17:41:36.1212552Z * [new branch] gh/malfet/197/orig -> origin/gh/malfet/197/orig 2025-03-17T17:41:36.1213983Z * [new branch] gh/malfet/198/base -> origin/gh/malfet/198/base 2025-03-17T17:41:36.1215100Z * [new branch] gh/malfet/198/head -> origin/gh/malfet/198/head 2025-03-17T17:41:36.1216221Z * [new branch] gh/malfet/198/orig -> origin/gh/malfet/198/orig 2025-03-17T17:41:36.1217783Z * [new branch] gh/malfet/199/base -> origin/gh/malfet/199/base 2025-03-17T17:41:36.1218820Z * [new branch] gh/malfet/199/head -> origin/gh/malfet/199/head 2025-03-17T17:41:36.1220016Z * [new branch] gh/malfet/199/orig -> origin/gh/malfet/199/orig 2025-03-17T17:41:36.1221478Z * [new branch] gh/malfet/200/base -> origin/gh/malfet/200/base 2025-03-17T17:41:36.1222943Z * [new branch] gh/malfet/200/head -> origin/gh/malfet/200/head 2025-03-17T17:41:36.1224259Z * [new branch] gh/malfet/200/orig -> origin/gh/malfet/200/orig 2025-03-17T17:41:36.1225834Z * [new branch] gh/malfet/201/base -> origin/gh/malfet/201/base 2025-03-17T17:41:36.1226996Z * [new branch] gh/malfet/201/head -> origin/gh/malfet/201/head 2025-03-17T17:41:36.1228267Z * [new branch] gh/malfet/201/orig -> origin/gh/malfet/201/orig 2025-03-17T17:41:36.1229645Z * [new branch] gh/malfet/202/base -> origin/gh/malfet/202/base 2025-03-17T17:41:36.1230712Z * [new branch] gh/malfet/202/head -> origin/gh/malfet/202/head 2025-03-17T17:41:36.1231870Z * [new branch] gh/malfet/202/orig -> origin/gh/malfet/202/orig 2025-03-17T17:41:36.1233193Z * [new branch] gh/malfet/203/base -> origin/gh/malfet/203/base 2025-03-17T17:41:36.1234415Z * [new branch] gh/malfet/203/head -> origin/gh/malfet/203/head 2025-03-17T17:41:36.1235547Z * [new branch] gh/malfet/203/orig -> origin/gh/malfet/203/orig 2025-03-17T17:41:36.1237153Z * [new branch] gh/malfet/204/base -> origin/gh/malfet/204/base 2025-03-17T17:41:36.1238318Z * [new branch] gh/malfet/204/head -> origin/gh/malfet/204/head 2025-03-17T17:41:36.1239400Z * [new branch] gh/malfet/204/orig -> origin/gh/malfet/204/orig 2025-03-17T17:41:36.1240886Z * [new branch] gh/malfet/205/base -> origin/gh/malfet/205/base 2025-03-17T17:41:36.1241960Z * [new branch] gh/malfet/205/head -> origin/gh/malfet/205/head 2025-03-17T17:41:36.1243561Z * [new branch] gh/malfet/205/orig -> origin/gh/malfet/205/orig 2025-03-17T17:41:36.1244995Z * [new branch] gh/malfet/206/base -> origin/gh/malfet/206/base 2025-03-17T17:41:36.1246092Z * [new branch] gh/malfet/206/head -> origin/gh/malfet/206/head 2025-03-17T17:41:36.1247648Z * [new branch] gh/malfet/206/orig -> origin/gh/malfet/206/orig 2025-03-17T17:41:36.1249034Z * [new branch] gh/malfet/207/base -> origin/gh/malfet/207/base 2025-03-17T17:41:36.1250542Z * [new branch] 
gh/malfet/207/head -> origin/gh/malfet/207/head 2025-03-17T17:41:36.1251719Z * [new branch] gh/malfet/207/orig -> origin/gh/malfet/207/orig 2025-03-17T17:41:36.1253202Z * [new branch] gh/malfet/208/base -> origin/gh/malfet/208/base 2025-03-17T17:41:36.1254279Z * [new branch] gh/malfet/208/head -> origin/gh/malfet/208/head 2025-03-17T17:41:36.1255460Z * [new branch] gh/malfet/208/orig -> origin/gh/malfet/208/orig 2025-03-17T17:41:36.1256810Z * [new branch] gh/malfet/209/base -> origin/gh/malfet/209/base 2025-03-17T17:41:36.1257911Z * [new branch] gh/malfet/209/head -> origin/gh/malfet/209/head 2025-03-17T17:41:36.1259018Z * [new branch] gh/malfet/209/orig -> origin/gh/malfet/209/orig 2025-03-17T17:41:36.1260563Z * [new branch] gh/malfet/210/base -> origin/gh/malfet/210/base 2025-03-17T17:41:36.1261588Z * [new branch] gh/malfet/210/head -> origin/gh/malfet/210/head 2025-03-17T17:41:36.1262846Z * [new branch] gh/malfet/210/orig -> origin/gh/malfet/210/orig 2025-03-17T17:41:36.1264137Z * [new branch] gh/malfet/211/base -> origin/gh/malfet/211/base 2025-03-17T17:41:36.1265194Z * [new branch] gh/malfet/211/head -> origin/gh/malfet/211/head 2025-03-17T17:41:36.1266286Z * [new branch] gh/malfet/211/orig -> origin/gh/malfet/211/orig 2025-03-17T17:41:36.1267823Z * [new branch] gh/malfet/212/base -> origin/gh/malfet/212/base 2025-03-17T17:41:36.1268917Z * [new branch] gh/malfet/212/head -> origin/gh/malfet/212/head 2025-03-17T17:41:36.1270083Z * [new branch] gh/malfet/212/orig -> origin/gh/malfet/212/orig 2025-03-17T17:41:36.1271580Z * [new branch] gh/malfet/213/base -> origin/gh/malfet/213/base 2025-03-17T17:41:36.1272663Z * [new branch] gh/malfet/213/head -> origin/gh/malfet/213/head 2025-03-17T17:41:36.1273821Z * [new branch] gh/malfet/213/orig -> origin/gh/malfet/213/orig 2025-03-17T17:41:36.1275337Z * [new branch] gh/malfet/214/base -> origin/gh/malfet/214/base 2025-03-17T17:41:36.1276369Z * [new branch] gh/malfet/214/head -> origin/gh/malfet/214/head 2025-03-17T17:41:36.1277480Z * [new branch] gh/malfet/214/orig -> origin/gh/malfet/214/orig 2025-03-17T17:41:36.1278814Z * [new branch] gh/malfet/215/base -> origin/gh/malfet/215/base 2025-03-17T17:41:36.1280008Z * [new branch] gh/malfet/215/head -> origin/gh/malfet/215/head 2025-03-17T17:41:36.1281193Z * [new branch] gh/malfet/215/orig -> origin/gh/malfet/215/orig 2025-03-17T17:41:36.1282552Z * [new branch] gh/malfet/216/base -> origin/gh/malfet/216/base 2025-03-17T17:41:36.1283662Z * [new branch] gh/malfet/216/head -> origin/gh/malfet/216/head 2025-03-17T17:41:36.1284849Z * [new branch] gh/malfet/216/orig -> origin/gh/malfet/216/orig 2025-03-17T17:41:36.1286361Z * [new branch] gh/malfet/217/base -> origin/gh/malfet/217/base 2025-03-17T17:41:36.1287411Z * [new branch] gh/malfet/217/head -> origin/gh/malfet/217/head 2025-03-17T17:41:36.1288545Z * [new branch] gh/malfet/217/orig -> origin/gh/malfet/217/orig 2025-03-17T17:41:36.1289829Z * [new branch] gh/malfet/218/base -> origin/gh/malfet/218/base 2025-03-17T17:41:36.1290915Z * [new branch] gh/malfet/218/head -> origin/gh/malfet/218/head 2025-03-17T17:41:36.1292007Z * [new branch] gh/malfet/218/orig -> origin/gh/malfet/218/orig 2025-03-17T17:41:36.1293505Z * [new branch] gh/malfet/219/base -> origin/gh/malfet/219/base 2025-03-17T17:41:36.1294606Z * [new branch] gh/malfet/219/head -> origin/gh/malfet/219/head 2025-03-17T17:41:36.1295759Z * [new branch] gh/malfet/219/orig -> origin/gh/malfet/219/orig 2025-03-17T17:41:36.1297174Z * [new branch] gh/malfet/220/base -> origin/gh/malfet/220/base 
2025-03-17T17:41:36.1298269Z * [new branch] gh/malfet/220/head -> origin/gh/malfet/220/head 2025-03-17T17:41:36.1299363Z * [new branch] gh/malfet/220/orig -> origin/gh/malfet/220/orig 2025-03-17T17:41:36.1300748Z * [new branch] gh/malfet/221/base -> origin/gh/malfet/221/base 2025-03-17T17:41:36.1301907Z * [new branch] gh/malfet/221/head -> origin/gh/malfet/221/head 2025-03-17T17:41:36.1302962Z * [new branch] gh/malfet/221/orig -> origin/gh/malfet/221/orig 2025-03-17T17:41:36.1304486Z * [new branch] gh/malfet/222/base -> origin/gh/malfet/222/base 2025-03-17T17:41:36.1305551Z * [new branch] gh/malfet/222/head -> origin/gh/malfet/222/head 2025-03-17T17:41:36.1306911Z * [new branch] gh/malfet/222/orig -> origin/gh/malfet/222/orig 2025-03-17T17:41:36.1308292Z * [new branch] gh/malfet/223/base -> origin/gh/malfet/223/base 2025-03-17T17:41:36.1309869Z * [new branch] gh/malfet/223/head -> origin/gh/malfet/223/head 2025-03-17T17:41:36.1310956Z * [new branch] gh/malfet/223/orig -> origin/gh/malfet/223/orig 2025-03-17T17:41:36.1312412Z * [new branch] gh/malfet/224/base -> origin/gh/malfet/224/base 2025-03-17T17:41:36.1313501Z * [new branch] gh/malfet/224/head -> origin/gh/malfet/224/head 2025-03-17T17:41:36.1314600Z * [new branch] gh/malfet/224/orig -> origin/gh/malfet/224/orig 2025-03-17T17:41:36.1316076Z * [new branch] gh/malfet/225/base -> origin/gh/malfet/225/base 2025-03-17T17:41:36.1317105Z * [new branch] gh/malfet/225/head -> origin/gh/malfet/225/head 2025-03-17T17:41:36.1318261Z * [new branch] gh/malfet/225/orig -> origin/gh/malfet/225/orig 2025-03-17T17:41:36.1319602Z * [new branch] gh/malfet/226/base -> origin/gh/malfet/226/base 2025-03-17T17:41:36.1320699Z * [new branch] gh/malfet/226/head -> origin/gh/malfet/226/head 2025-03-17T17:41:36.1321809Z * [new branch] gh/malfet/226/orig -> origin/gh/malfet/226/orig 2025-03-17T17:41:36.1323403Z * [new branch] gh/malfet/227/base -> origin/gh/malfet/227/base 2025-03-17T17:41:36.1324436Z * [new branch] gh/malfet/227/head -> origin/gh/malfet/227/head 2025-03-17T17:41:36.1325468Z * [new branch] gh/malfet/227/orig -> origin/gh/malfet/227/orig 2025-03-17T17:41:36.1326805Z * [new branch] gh/malfet/228/base -> origin/gh/malfet/228/base 2025-03-17T17:41:36.1327901Z * [new branch] gh/malfet/228/head -> origin/gh/malfet/228/head 2025-03-17T17:41:36.1329143Z * [new branch] gh/malfet/228/orig -> origin/gh/malfet/228/orig 2025-03-17T17:41:36.1330902Z * [new branch] gh/malfet/229/base -> origin/gh/malfet/229/base 2025-03-17T17:41:36.1332068Z * [new branch] gh/malfet/229/head -> origin/gh/malfet/229/head 2025-03-17T17:41:36.1333162Z * [new branch] gh/malfet/229/orig -> origin/gh/malfet/229/orig 2025-03-17T17:41:36.1334619Z * [new branch] gh/malfet/230/base -> origin/gh/malfet/230/base 2025-03-17T17:41:36.1335668Z * [new branch] gh/malfet/230/head -> origin/gh/malfet/230/head 2025-03-17T17:41:36.1336915Z * [new branch] gh/malfet/230/orig -> origin/gh/malfet/230/orig 2025-03-17T17:41:36.1341021Z * [new branch] gh/malfet/231/base -> origin/gh/malfet/231/base 2025-03-17T17:41:36.1341989Z * [new branch] gh/malfet/231/head -> origin/gh/malfet/231/head 2025-03-17T17:41:36.1343148Z * [new branch] gh/malfet/231/orig -> origin/gh/malfet/231/orig 2025-03-17T17:41:36.1344488Z * [new branch] gh/malfet/232/base -> origin/gh/malfet/232/base 2025-03-17T17:41:36.1345664Z * [new branch] gh/malfet/232/head -> origin/gh/malfet/232/head 2025-03-17T17:41:36.1346911Z * [new branch] gh/malfet/232/orig -> origin/gh/malfet/232/orig 2025-03-17T17:41:36.1348439Z * [new branch] 
gh/malfet/233/base -> origin/gh/malfet/233/base 2025-03-17T17:41:36.1349530Z * [new branch] gh/malfet/233/head -> origin/gh/malfet/233/head 2025-03-17T17:41:36.1350698Z * [new branch] gh/malfet/233/orig -> origin/gh/malfet/233/orig 2025-03-17T17:41:36.1352087Z * [new branch] gh/malfet/234/base -> origin/gh/malfet/234/base 2025-03-17T17:41:36.1353333Z * [new branch] gh/malfet/234/head -> origin/gh/malfet/234/head 2025-03-17T17:41:36.1354289Z * [new branch] gh/malfet/234/orig -> origin/gh/malfet/234/orig 2025-03-17T17:41:36.1355747Z * [new branch] gh/malfet/235/base -> origin/gh/malfet/235/base 2025-03-17T17:41:36.1356811Z * [new branch] gh/malfet/235/head -> origin/gh/malfet/235/head 2025-03-17T17:41:36.1357942Z * [new branch] gh/malfet/235/orig -> origin/gh/malfet/235/orig 2025-03-17T17:41:36.1359917Z * [new branch] gh/malfet/64/base -> origin/gh/malfet/64/base 2025-03-17T17:41:36.1360996Z * [new branch] gh/malfet/64/head -> origin/gh/malfet/64/head 2025-03-17T17:41:36.1362818Z * [new branch] gh/malfet/96/base -> origin/gh/malfet/96/base 2025-03-17T17:41:36.1364539Z * [new branch] gh/malfet/96/head -> origin/gh/malfet/96/head 2025-03-17T17:41:36.1365216Z * [new branch] gh/malfet/96/orig -> origin/gh/malfet/96/orig 2025-03-17T17:41:36.1367223Z * [new branch] gh/markkm/1/base -> origin/gh/markkm/1/base 2025-03-17T17:41:36.1369355Z * [new branch] gh/masnesral/155/base -> origin/gh/masnesral/155/base 2025-03-17T17:41:36.1370252Z * [new branch] gh/masnesral/155/head -> origin/gh/masnesral/155/head 2025-03-17T17:41:36.1371368Z * [new branch] gh/masnesral/155/orig -> origin/gh/masnesral/155/orig 2025-03-17T17:41:36.1372775Z * [new branch] gh/masnesral/161/base -> origin/gh/masnesral/161/base 2025-03-17T17:41:36.1373785Z * [new branch] gh/masnesral/161/head -> origin/gh/masnesral/161/head 2025-03-17T17:41:36.1374991Z * [new branch] gh/masnesral/161/orig -> origin/gh/masnesral/161/orig 2025-03-17T17:41:36.1376257Z * [new branch] gh/masnesral/162/base -> origin/gh/masnesral/162/base 2025-03-17T17:41:36.1377388Z * [new branch] gh/masnesral/162/head -> origin/gh/masnesral/162/head 2025-03-17T17:41:36.1378427Z * [new branch] gh/masnesral/162/orig -> origin/gh/masnesral/162/orig 2025-03-17T17:41:36.1379940Z * [new branch] gh/masnesral/173/base -> origin/gh/masnesral/173/base 2025-03-17T17:41:36.1381119Z * [new branch] gh/masnesral/173/head -> origin/gh/masnesral/173/head 2025-03-17T17:41:36.1382230Z * [new branch] gh/masnesral/173/orig -> origin/gh/masnesral/173/orig 2025-03-17T17:41:36.1383862Z * [new branch] gh/masnesral/176/base -> origin/gh/masnesral/176/base 2025-03-17T17:41:36.1385189Z * [new branch] gh/masnesral/176/head -> origin/gh/masnesral/176/head 2025-03-17T17:41:36.1386442Z * [new branch] gh/masnesral/176/orig -> origin/gh/masnesral/176/orig 2025-03-17T17:41:36.1388067Z * [new branch] gh/masnesral/177/base -> origin/gh/masnesral/177/base 2025-03-17T17:41:36.1389316Z * [new branch] gh/masnesral/177/head -> origin/gh/masnesral/177/head 2025-03-17T17:41:36.1390382Z * [new branch] gh/masnesral/177/orig -> origin/gh/masnesral/177/orig 2025-03-17T17:41:36.1391680Z * [new branch] gh/masnesral/178/base -> origin/gh/masnesral/178/base 2025-03-17T17:41:36.1392798Z * [new branch] gh/masnesral/178/head -> origin/gh/masnesral/178/head 2025-03-17T17:41:36.1393964Z * [new branch] gh/masnesral/178/orig -> origin/gh/masnesral/178/orig 2025-03-17T17:41:36.1396253Z * [new branch] gh/masnesral/179/base -> origin/gh/masnesral/179/base 2025-03-17T17:41:36.1396872Z * [new branch] gh/masnesral/179/head -> 
origin/gh/masnesral/179/head 2025-03-17T17:41:36.1397983Z * [new branch] gh/masnesral/179/orig -> origin/gh/masnesral/179/orig 2025-03-17T17:41:36.1399680Z * [new branch] gh/masnesral/180/base -> origin/gh/masnesral/180/base 2025-03-17T17:41:36.1401177Z * [new branch] gh/masnesral/180/head -> origin/gh/masnesral/180/head 2025-03-17T17:41:36.1402508Z * [new branch] gh/masnesral/180/orig -> origin/gh/masnesral/180/orig 2025-03-17T17:41:36.1404162Z * [new branch] gh/masnesral/181/base -> origin/gh/masnesral/181/base 2025-03-17T17:41:36.1405165Z * [new branch] gh/masnesral/181/head -> origin/gh/masnesral/181/head 2025-03-17T17:41:36.1406280Z * [new branch] gh/masnesral/181/orig -> origin/gh/masnesral/181/orig 2025-03-17T17:41:36.1407815Z * [new branch] gh/masnesral/34/base -> origin/gh/masnesral/34/base 2025-03-17T17:41:36.1409523Z * [new branch] gh/mhorowitz/0/base -> origin/gh/mhorowitz/0/base 2025-03-17T17:41:36.1410568Z * [new branch] gh/mhorowitz/0/head -> origin/gh/mhorowitz/0/head 2025-03-17T17:41:36.1411899Z * [new branch] gh/mhorowitz/1/base -> origin/gh/mhorowitz/1/base 2025-03-17T17:41:36.1415247Z * [new branch] gh/mhorowitz/1/head -> origin/gh/mhorowitz/1/head 2025-03-17T17:41:36.1416196Z * [new branch] gh/mhorowitz/2/base -> origin/gh/mhorowitz/2/base 2025-03-17T17:41:36.1416638Z * [new branch] gh/mhorowitz/2/head -> origin/gh/mhorowitz/2/head 2025-03-17T17:41:36.1417082Z * [new branch] gh/mhorowitz/3/base -> origin/gh/mhorowitz/3/base 2025-03-17T17:41:36.1417763Z * [new branch] gh/mhorowitz/3/head -> origin/gh/mhorowitz/3/head 2025-03-17T17:41:36.1418995Z * [new branch] gh/mhorowitz/4/base -> origin/gh/mhorowitz/4/base 2025-03-17T17:41:36.1419938Z * [new branch] gh/mhorowitz/4/head -> origin/gh/mhorowitz/4/head 2025-03-17T17:41:36.1421136Z * [new branch] gh/mhorowitz/5/base -> origin/gh/mhorowitz/5/base 2025-03-17T17:41:36.1421985Z * [new branch] gh/mhorowitz/5/head -> origin/gh/mhorowitz/5/head 2025-03-17T17:41:36.1423189Z * [new branch] gh/mhorowitz/6/base -> origin/gh/mhorowitz/6/base 2025-03-17T17:41:36.1424017Z * [new branch] gh/mhorowitz/6/head -> origin/gh/mhorowitz/6/head 2025-03-17T17:41:36.1425774Z * [new branch] gh/mikaylagawarecki/234/base -> origin/gh/mikaylagawarecki/234/base 2025-03-17T17:41:36.1426777Z * [new branch] gh/mikaylagawarecki/234/head -> origin/gh/mikaylagawarecki/234/head 2025-03-17T17:41:36.1428136Z * [new branch] gh/mikaylagawarecki/235/base -> origin/gh/mikaylagawarecki/235/base 2025-03-17T17:41:36.1428972Z * [new branch] gh/mikaylagawarecki/235/head -> origin/gh/mikaylagawarecki/235/head 2025-03-17T17:41:36.1430287Z * [new branch] gh/mikaylagawarecki/236/base -> origin/gh/mikaylagawarecki/236/base 2025-03-17T17:41:36.1431128Z * [new branch] gh/mikaylagawarecki/236/head -> origin/gh/mikaylagawarecki/236/head 2025-03-17T17:41:36.1432396Z * [new branch] gh/mikaylagawarecki/237/base -> origin/gh/mikaylagawarecki/237/base 2025-03-17T17:41:36.1433203Z * [new branch] gh/mikaylagawarecki/237/head -> origin/gh/mikaylagawarecki/237/head 2025-03-17T17:41:36.1434621Z * [new branch] gh/mikaylagawarecki/238/base -> origin/gh/mikaylagawarecki/238/base 2025-03-17T17:41:36.1435471Z * [new branch] gh/mikaylagawarecki/238/head -> origin/gh/mikaylagawarecki/238/head 2025-03-17T17:41:36.1437262Z * [new branch] gh/mikaylagawarecki/281/base -> origin/gh/mikaylagawarecki/281/base 2025-03-17T17:41:36.1438292Z * [new branch] gh/mikaylagawarecki/281/head -> origin/gh/mikaylagawarecki/281/head 2025-03-17T17:41:36.1439301Z * [new branch] gh/mikaylagawarecki/281/orig -> 
origin/gh/mikaylagawarecki/281/orig 2025-03-17T17:41:36.1440811Z * [new branch] gh/mikaylagawarecki/299/base -> origin/gh/mikaylagawarecki/299/base 2025-03-17T17:41:36.1441596Z * [new branch] gh/mikaylagawarecki/299/head -> origin/gh/mikaylagawarecki/299/head 2025-03-17T17:41:36.1442589Z * [new branch] gh/mikaylagawarecki/299/orig -> origin/gh/mikaylagawarecki/299/orig 2025-03-17T17:41:36.1443928Z * [new branch] gh/mikaylagawarecki/304/base -> origin/gh/mikaylagawarecki/304/base 2025-03-17T17:41:36.1444832Z * [new branch] gh/mikaylagawarecki/304/head -> origin/gh/mikaylagawarecki/304/head 2025-03-17T17:41:36.1445757Z * [new branch] gh/mikaylagawarecki/304/orig -> origin/gh/mikaylagawarecki/304/orig 2025-03-17T17:41:36.1447219Z * [new branch] gh/mikaylagawarecki/307/base -> origin/gh/mikaylagawarecki/307/base 2025-03-17T17:41:36.1448069Z * [new branch] gh/mikaylagawarecki/307/head -> origin/gh/mikaylagawarecki/307/head 2025-03-17T17:41:36.1449076Z * [new branch] gh/mikaylagawarecki/307/orig -> origin/gh/mikaylagawarecki/307/orig 2025-03-17T17:41:36.1450399Z * [new branch] gh/mikaylagawarecki/310/base -> origin/gh/mikaylagawarecki/310/base 2025-03-17T17:41:36.1451379Z * [new branch] gh/mikaylagawarecki/310/head -> origin/gh/mikaylagawarecki/310/head 2025-03-17T17:41:36.1452321Z * [new branch] gh/mikaylagawarecki/310/orig -> origin/gh/mikaylagawarecki/310/orig 2025-03-17T17:41:36.1453663Z * [new branch] gh/mikaylagawarecki/313/base -> origin/gh/mikaylagawarecki/313/base 2025-03-17T17:41:36.1454658Z * [new branch] gh/mikaylagawarecki/313/head -> origin/gh/mikaylagawarecki/313/head 2025-03-17T17:41:36.1455622Z * [new branch] gh/mikaylagawarecki/313/orig -> origin/gh/mikaylagawarecki/313/orig 2025-03-17T17:41:36.1457002Z * [new branch] gh/mikaylagawarecki/314/base -> origin/gh/mikaylagawarecki/314/base 2025-03-17T17:41:36.1457939Z * [new branch] gh/mikaylagawarecki/314/head -> origin/gh/mikaylagawarecki/314/head 2025-03-17T17:41:36.1458926Z * [new branch] gh/mikaylagawarecki/314/orig -> origin/gh/mikaylagawarecki/314/orig 2025-03-17T17:41:36.1460344Z * [new branch] gh/mikaylagawarecki/315/base -> origin/gh/mikaylagawarecki/315/base 2025-03-17T17:41:36.1461289Z * [new branch] gh/mikaylagawarecki/315/head -> origin/gh/mikaylagawarecki/315/head 2025-03-17T17:41:36.1462313Z * [new branch] gh/mikaylagawarecki/315/orig -> origin/gh/mikaylagawarecki/315/orig 2025-03-17T17:41:36.1463758Z * [new branch] gh/mikaylagawarecki/316/base -> origin/gh/mikaylagawarecki/316/base 2025-03-17T17:41:36.1464677Z * [new branch] gh/mikaylagawarecki/316/head -> origin/gh/mikaylagawarecki/316/head 2025-03-17T17:41:36.1465629Z * [new branch] gh/mikaylagawarecki/316/orig -> origin/gh/mikaylagawarecki/316/orig 2025-03-17T17:41:36.1467112Z * [new branch] gh/mikaylagawarecki/317/base -> origin/gh/mikaylagawarecki/317/base 2025-03-17T17:41:36.1468063Z * [new branch] gh/mikaylagawarecki/317/head -> origin/gh/mikaylagawarecki/317/head 2025-03-17T17:41:36.1469049Z * [new branch] gh/mikaylagawarecki/317/orig -> origin/gh/mikaylagawarecki/317/orig 2025-03-17T17:41:36.1470492Z * [new branch] gh/mikaylagawarecki/318/base -> origin/gh/mikaylagawarecki/318/base 2025-03-17T17:41:36.1471403Z * [new branch] gh/mikaylagawarecki/318/head -> origin/gh/mikaylagawarecki/318/head 2025-03-17T17:41:36.1472412Z * [new branch] gh/mikaylagawarecki/318/orig -> origin/gh/mikaylagawarecki/318/orig 2025-03-17T17:41:36.1474340Z * [new branch] gh/mikaylagawarecki/319/base -> origin/gh/mikaylagawarecki/319/base 2025-03-17T17:41:36.1475230Z * [new branch] 
gh/mikaylagawarecki/319/head -> origin/gh/mikaylagawarecki/319/head 2025-03-17T17:41:36.1476193Z * [new branch] gh/mikaylagawarecki/319/orig -> origin/gh/mikaylagawarecki/319/orig 2025-03-17T17:41:36.1477784Z * [new branch] gh/mikaylagawarecki/320/base -> origin/gh/mikaylagawarecki/320/base 2025-03-17T17:41:36.1478556Z * [new branch] gh/mikaylagawarecki/320/head -> origin/gh/mikaylagawarecki/320/head 2025-03-17T17:41:36.1479592Z * [new branch] gh/mikaylagawarecki/320/orig -> origin/gh/mikaylagawarecki/320/orig 2025-03-17T17:41:36.1480965Z * [new branch] gh/mikaylagawarecki/321/base -> origin/gh/mikaylagawarecki/321/base 2025-03-17T17:41:36.1481923Z * [new branch] gh/mikaylagawarecki/321/head -> origin/gh/mikaylagawarecki/321/head 2025-03-17T17:41:36.1482951Z * [new branch] gh/mikaylagawarecki/321/orig -> origin/gh/mikaylagawarecki/321/orig 2025-03-17T17:41:36.1484320Z * [new branch] gh/mikaylagawarecki/322/base -> origin/gh/mikaylagawarecki/322/base 2025-03-17T17:41:36.1485221Z * [new branch] gh/mikaylagawarecki/322/head -> origin/gh/mikaylagawarecki/322/head 2025-03-17T17:41:36.1486271Z * [new branch] gh/mikaylagawarecki/322/orig -> origin/gh/mikaylagawarecki/322/orig 2025-03-17T17:41:36.1488109Z * [new branch] gh/mikaylagawarecki/323/base -> origin/gh/mikaylagawarecki/323/base 2025-03-17T17:41:36.1489091Z * [new branch] gh/mikaylagawarecki/323/head -> origin/gh/mikaylagawarecki/323/head 2025-03-17T17:41:36.1490126Z * [new branch] gh/mikaylagawarecki/323/orig -> origin/gh/mikaylagawarecki/323/orig 2025-03-17T17:41:36.1491429Z * [new branch] gh/mikaylagawarecki/324/base -> origin/gh/mikaylagawarecki/324/base 2025-03-17T17:41:36.1492329Z * [new branch] gh/mikaylagawarecki/324/head -> origin/gh/mikaylagawarecki/324/head 2025-03-17T17:41:36.1493305Z * [new branch] gh/mikaylagawarecki/324/orig -> origin/gh/mikaylagawarecki/324/orig 2025-03-17T17:41:36.1494724Z * [new branch] gh/mlazos/1/base -> origin/gh/mlazos/1/base 2025-03-17T17:41:36.1495667Z * [new branch] gh/mlazos/1/head -> origin/gh/mlazos/1/head 2025-03-17T17:41:36.1496891Z * [new branch] gh/mlazos/2/base -> origin/gh/mlazos/2/base 2025-03-17T17:41:36.1497739Z * [new branch] gh/mlazos/2/head -> origin/gh/mlazos/2/head 2025-03-17T17:41:36.1499051Z * [new branch] gh/mlazos/3/base -> origin/gh/mlazos/3/base 2025-03-17T17:41:36.1499984Z * [new branch] gh/mlazos/3/head -> origin/gh/mlazos/3/head 2025-03-17T17:41:36.1500932Z * [new branch] gh/mlazos/3/orig -> origin/gh/mlazos/3/orig 2025-03-17T17:41:36.1502358Z * [new branch] gh/mlazos/4/base -> origin/gh/mlazos/4/base 2025-03-17T17:41:36.1503279Z * [new branch] gh/mlazos/4/head -> origin/gh/mlazos/4/head 2025-03-17T17:41:36.1504229Z * [new branch] gh/mlazos/4/orig -> origin/gh/mlazos/4/orig 2025-03-17T17:41:36.1505731Z * [new branch] gh/mlazos/5/base -> origin/gh/mlazos/5/base 2025-03-17T17:41:36.1506687Z * [new branch] gh/mlazos/5/head -> origin/gh/mlazos/5/head 2025-03-17T17:41:36.1507772Z * [new branch] gh/mlazos/5/orig -> origin/gh/mlazos/5/orig 2025-03-17T17:41:36.1509068Z * [new branch] gh/mlazos/6/base -> origin/gh/mlazos/6/base 2025-03-17T17:41:36.1510043Z * [new branch] gh/mlazos/6/head -> origin/gh/mlazos/6/head 2025-03-17T17:41:36.1510997Z * [new branch] gh/mlazos/6/orig -> origin/gh/mlazos/6/orig 2025-03-17T17:41:36.1512402Z * [new branch] gh/mlazos/7/base -> origin/gh/mlazos/7/base 2025-03-17T17:41:36.1513333Z * [new branch] gh/mlazos/7/head -> origin/gh/mlazos/7/head 2025-03-17T17:41:36.1514358Z * [new branch] gh/mlazos/7/orig -> origin/gh/mlazos/7/orig 
2025-03-17T17:41:36.1516044Z * [new branch] gh/mrmiywj/1/base -> origin/gh/mrmiywj/1/base 2025-03-17T17:41:36.1517058Z * [new branch] gh/mrmiywj/1/head -> origin/gh/mrmiywj/1/head 2025-03-17T17:41:36.1518607Z * [new branch] gh/muchulee8/1/base -> origin/gh/muchulee8/1/base 2025-03-17T17:41:36.1519497Z * [new branch] gh/muchulee8/1/orig -> origin/gh/muchulee8/1/orig 2025-03-17T17:41:36.1520878Z * [new branch] gh/muchulee8/2/base -> origin/gh/muchulee8/2/base 2025-03-17T17:41:36.1521806Z * [new branch] gh/muchulee8/2/orig -> origin/gh/muchulee8/2/orig 2025-03-17T17:41:36.1523222Z * [new branch] gh/muchulee8/40/base -> origin/gh/muchulee8/40/base 2025-03-17T17:41:36.1524155Z * [new branch] gh/muchulee8/40/head -> origin/gh/muchulee8/40/head 2025-03-17T17:41:36.1525109Z * [new branch] gh/muchulee8/40/orig -> origin/gh/muchulee8/40/orig 2025-03-17T17:41:36.1526538Z * [new branch] gh/muchulee8/41/base -> origin/gh/muchulee8/41/base 2025-03-17T17:41:36.1527500Z * [new branch] gh/muchulee8/41/head -> origin/gh/muchulee8/41/head 2025-03-17T17:41:36.1528557Z * [new branch] gh/muchulee8/41/orig -> origin/gh/muchulee8/41/orig 2025-03-17T17:41:36.1530142Z * [new branch] gh/muchulee8/42/base -> origin/gh/muchulee8/42/base 2025-03-17T17:41:36.1531099Z * [new branch] gh/muchulee8/42/head -> origin/gh/muchulee8/42/head 2025-03-17T17:41:36.1532086Z * [new branch] gh/muchulee8/42/orig -> origin/gh/muchulee8/42/orig 2025-03-17T17:41:36.1533687Z * [new branch] gh/muchulee8/43/base -> origin/gh/muchulee8/43/base 2025-03-17T17:41:36.1534618Z * [new branch] gh/muchulee8/43/head -> origin/gh/muchulee8/43/head 2025-03-17T17:41:36.1535683Z * [new branch] gh/muchulee8/43/orig -> origin/gh/muchulee8/43/orig 2025-03-17T17:41:36.1537265Z * [new branch] gh/muchulee8/44/base -> origin/gh/muchulee8/44/base 2025-03-17T17:41:36.1538766Z * [new branch] gh/muchulee8/44/head -> origin/gh/muchulee8/44/head 2025-03-17T17:41:36.1539711Z * [new branch] gh/muchulee8/44/orig -> origin/gh/muchulee8/44/orig 2025-03-17T17:41:36.1540984Z * [new branch] gh/muchulee8/45/base -> origin/gh/muchulee8/45/base 2025-03-17T17:41:36.1542389Z * [new branch] gh/muchulee8/45/head -> origin/gh/muchulee8/45/head 2025-03-17T17:41:36.1543410Z * [new branch] gh/muchulee8/45/orig -> origin/gh/muchulee8/45/orig 2025-03-17T17:41:36.1544807Z * [new branch] gh/muchulee8/46/base -> origin/gh/muchulee8/46/base 2025-03-17T17:41:36.1545774Z * [new branch] gh/muchulee8/46/head -> origin/gh/muchulee8/46/head 2025-03-17T17:41:36.1546776Z * [new branch] gh/muchulee8/46/orig -> origin/gh/muchulee8/46/orig 2025-03-17T17:41:36.1548398Z * [new branch] gh/muchulee8/5/base -> origin/gh/muchulee8/5/base 2025-03-17T17:41:36.1549297Z * [new branch] gh/muchulee8/5/orig -> origin/gh/muchulee8/5/orig 2025-03-17T17:41:36.1550874Z * [new branch] gh/mzzchy/2/base -> origin/gh/mzzchy/2/base 2025-03-17T17:41:36.1551841Z * [new branch] gh/mzzchy/2/head -> origin/gh/mzzchy/2/head 2025-03-17T17:41:36.1552849Z * [new branch] gh/mzzchy/2/orig -> origin/gh/mzzchy/2/orig 2025-03-17T17:41:36.1554222Z * [new branch] gh/mzzchy/3/base -> origin/gh/mzzchy/3/base 2025-03-17T17:41:36.1555157Z * [new branch] gh/mzzchy/3/head -> origin/gh/mzzchy/3/head 2025-03-17T17:41:36.1556064Z * [new branch] gh/mzzchy/3/orig -> origin/gh/mzzchy/3/orig 2025-03-17T17:41:36.1557319Z * [new branch] gh/mzzchy/4/base -> origin/gh/mzzchy/4/base 2025-03-17T17:41:36.1558398Z * [new branch] gh/mzzchy/4/head -> origin/gh/mzzchy/4/head 2025-03-17T17:41:36.1560120Z * [new branch] gh/mzzchy/5/base -> origin/gh/mzzchy/5/base 
2025-03-17T17:41:36.1561138Z * [new branch] gh/mzzchy/5/head -> origin/gh/mzzchy/5/head 2025-03-17T17:41:36.1562181Z * [new branch] gh/mzzchy/5/orig -> origin/gh/mzzchy/5/orig 2025-03-17T17:41:36.1563919Z * [new branch] gh/nmacchioni/12/base -> origin/gh/nmacchioni/12/base 2025-03-17T17:41:36.1564789Z * [new branch] gh/nmacchioni/12/head -> origin/gh/nmacchioni/12/head 2025-03-17T17:41:36.1565863Z * [new branch] gh/nmacchioni/12/orig -> origin/gh/nmacchioni/12/orig 2025-03-17T17:41:36.1567646Z * [new branch] gh/nmacchioni/31/base -> origin/gh/nmacchioni/31/base 2025-03-17T17:41:36.1568563Z * [new branch] gh/nmacchioni/31/head -> origin/gh/nmacchioni/31/head 2025-03-17T17:41:36.1569524Z * [new branch] gh/nmacchioni/31/orig -> origin/gh/nmacchioni/31/orig 2025-03-17T17:41:36.1570892Z * [new branch] gh/nmacchioni/32/base -> origin/gh/nmacchioni/32/base 2025-03-17T17:41:36.1571766Z * [new branch] gh/nmacchioni/32/head -> origin/gh/nmacchioni/32/head 2025-03-17T17:41:36.1572760Z * [new branch] gh/nmacchioni/32/orig -> origin/gh/nmacchioni/32/orig 2025-03-17T17:41:36.1574100Z * [new branch] gh/nmacchioni/33/base -> origin/gh/nmacchioni/33/base 2025-03-17T17:41:36.1574972Z * [new branch] gh/nmacchioni/33/head -> origin/gh/nmacchioni/33/head 2025-03-17T17:41:36.1575944Z * [new branch] gh/nmacchioni/33/orig -> origin/gh/nmacchioni/33/orig 2025-03-17T17:41:36.1577326Z * [new branch] gh/nmacchioni/35/base -> origin/gh/nmacchioni/35/base 2025-03-17T17:41:36.1578260Z * [new branch] gh/nmacchioni/35/head -> origin/gh/nmacchioni/35/head 2025-03-17T17:41:36.1579215Z * [new branch] gh/nmacchioni/35/orig -> origin/gh/nmacchioni/35/orig 2025-03-17T17:41:36.1580632Z * [new branch] gh/nmacchioni/36/base -> origin/gh/nmacchioni/36/base 2025-03-17T17:41:36.1581717Z * [new branch] gh/nmacchioni/36/head -> origin/gh/nmacchioni/36/head 2025-03-17T17:41:36.1582501Z * [new branch] gh/nmacchioni/36/orig -> origin/gh/nmacchioni/36/orig 2025-03-17T17:41:36.1583759Z * [new branch] gh/nmacchioni/37/base -> origin/gh/nmacchioni/37/base 2025-03-17T17:41:36.1584665Z * [new branch] gh/nmacchioni/37/head -> origin/gh/nmacchioni/37/head 2025-03-17T17:41:36.1585604Z * [new branch] gh/nmacchioni/37/orig -> origin/gh/nmacchioni/37/orig 2025-03-17T17:41:36.1587074Z * [new branch] gh/nmacchioni/39/base -> origin/gh/nmacchioni/39/base 2025-03-17T17:41:36.1588042Z * [new branch] gh/nmacchioni/39/head -> origin/gh/nmacchioni/39/head 2025-03-17T17:41:36.1588952Z * [new branch] gh/nmacchioni/39/orig -> origin/gh/nmacchioni/39/orig 2025-03-17T17:41:36.1590371Z * [new branch] gh/nmacchioni/8/base -> origin/gh/nmacchioni/8/base 2025-03-17T17:41:36.1591222Z * [new branch] gh/nmacchioni/8/head -> origin/gh/nmacchioni/8/head 2025-03-17T17:41:36.1592203Z * [new branch] gh/nmacchioni/8/orig -> origin/gh/nmacchioni/8/orig 2025-03-17T17:41:36.1593841Z * [new branch] gh/oulgen/150/base -> origin/gh/oulgen/150/base 2025-03-17T17:41:36.1594808Z * [new branch] gh/oulgen/150/head -> origin/gh/oulgen/150/head 2025-03-17T17:41:36.1595718Z * [new branch] gh/oulgen/150/orig -> origin/gh/oulgen/150/orig 2025-03-17T17:41:36.1597184Z * [new branch] gh/oulgen/151/base -> origin/gh/oulgen/151/base 2025-03-17T17:41:36.1598265Z * [new branch] gh/oulgen/151/head -> origin/gh/oulgen/151/head 2025-03-17T17:41:36.1599105Z * [new branch] gh/oulgen/151/orig -> origin/gh/oulgen/151/orig 2025-03-17T17:41:36.1600794Z * [new branch] gh/oulgen/152/base -> origin/gh/oulgen/152/base 2025-03-17T17:41:36.1601422Z * [new branch] gh/oulgen/152/head -> origin/gh/oulgen/152/head 
2025-03-17T17:41:36.1602298Z * [new branch] gh/oulgen/152/orig -> origin/gh/oulgen/152/orig 2025-03-17T17:41:36.1604117Z * [new branch] gh/oulgen/153/base -> origin/gh/oulgen/153/base 2025-03-17T17:41:36.1605003Z * [new branch] gh/oulgen/153/head -> origin/gh/oulgen/153/head 2025-03-17T17:41:36.1605944Z * [new branch] gh/oulgen/153/orig -> origin/gh/oulgen/153/orig 2025-03-17T17:41:36.1607880Z * [new branch] gh/oulgen/154/base -> origin/gh/oulgen/154/base 2025-03-17T17:41:36.1608841Z * [new branch] gh/oulgen/154/head -> origin/gh/oulgen/154/head 2025-03-17T17:41:36.1609835Z * [new branch] gh/oulgen/154/orig -> origin/gh/oulgen/154/orig 2025-03-17T17:41:36.1611204Z * [new branch] gh/oulgen/155/base -> origin/gh/oulgen/155/base 2025-03-17T17:41:36.1612192Z * [new branch] gh/oulgen/155/head -> origin/gh/oulgen/155/head 2025-03-17T17:41:36.1613192Z * [new branch] gh/oulgen/155/orig -> origin/gh/oulgen/155/orig 2025-03-17T17:41:36.1614487Z * [new branch] gh/oulgen/156/base -> origin/gh/oulgen/156/base 2025-03-17T17:41:36.1615317Z * [new branch] gh/oulgen/156/head -> origin/gh/oulgen/156/head 2025-03-17T17:41:36.1616309Z * [new branch] gh/oulgen/156/orig -> origin/gh/oulgen/156/orig 2025-03-17T17:41:36.1617682Z * [new branch] gh/oulgen/157/base -> origin/gh/oulgen/157/base 2025-03-17T17:41:36.1618555Z * [new branch] gh/oulgen/157/head -> origin/gh/oulgen/157/head 2025-03-17T17:41:36.1619577Z * [new branch] gh/oulgen/157/orig -> origin/gh/oulgen/157/orig 2025-03-17T17:41:36.1620890Z * [new branch] gh/oulgen/158/base -> origin/gh/oulgen/158/base 2025-03-17T17:41:36.1621799Z * [new branch] gh/oulgen/158/head -> origin/gh/oulgen/158/head 2025-03-17T17:41:36.1622763Z * [new branch] gh/oulgen/158/orig -> origin/gh/oulgen/158/orig 2025-03-17T17:41:36.1624188Z * [new branch] gh/oulgen/159/base -> origin/gh/oulgen/159/base 2025-03-17T17:41:36.1625000Z * [new branch] gh/oulgen/159/head -> origin/gh/oulgen/159/head 2025-03-17T17:41:36.1625957Z * [new branch] gh/oulgen/159/orig -> origin/gh/oulgen/159/orig 2025-03-17T17:41:36.1627674Z * [new branch] gh/oulgen/160/base -> origin/gh/oulgen/160/base 2025-03-17T17:41:36.1628637Z * [new branch] gh/oulgen/160/head -> origin/gh/oulgen/160/head 2025-03-17T17:41:36.1629601Z * [new branch] gh/oulgen/160/orig -> origin/gh/oulgen/160/orig 2025-03-17T17:41:36.1630989Z * [new branch] gh/oulgen/161/base -> origin/gh/oulgen/161/base 2025-03-17T17:41:36.1631918Z * [new branch] gh/oulgen/161/head -> origin/gh/oulgen/161/head 2025-03-17T17:41:36.1632900Z * [new branch] gh/oulgen/161/orig -> origin/gh/oulgen/161/orig 2025-03-17T17:41:36.1634325Z * [new branch] gh/oulgen/2/base -> origin/gh/oulgen/2/base 2025-03-17T17:41:36.1635220Z * [new branch] gh/oulgen/2/head -> origin/gh/oulgen/2/head 2025-03-17T17:41:36.1636210Z * [new branch] gh/oulgen/2/orig -> origin/gh/oulgen/2/orig 2025-03-17T17:41:36.1639658Z * [new branch] gh/oulgen/21/base -> origin/gh/oulgen/21/base 2025-03-17T17:41:36.1640845Z * [new branch] gh/oulgen/21/head -> origin/gh/oulgen/21/head 2025-03-17T17:41:36.1641747Z * [new branch] gh/oulgen/21/orig -> origin/gh/oulgen/21/orig 2025-03-17T17:41:36.1643491Z * [new branch] gh/pearu/108/base -> origin/gh/pearu/108/base 2025-03-17T17:41:36.1644504Z * [new branch] gh/pearu/108/head -> origin/gh/pearu/108/head 2025-03-17T17:41:36.1645490Z * [new branch] gh/pearu/108/orig -> origin/gh/pearu/108/orig 2025-03-17T17:41:36.1647717Z * [new branch] gh/pearu/56/base -> origin/gh/pearu/56/base 2025-03-17T17:41:36.1648894Z * [new branch] gh/pearu/56/head -> 
origin/gh/pearu/56/head 2025-03-17T17:41:36.1649889Z * [new branch] gh/pearu/56/orig -> origin/gh/pearu/56/orig 2025-03-17T17:41:36.1651354Z * [new branch] gh/pearu/97/base -> origin/gh/pearu/97/base 2025-03-17T17:41:36.1652294Z * [new branch] gh/pearu/97/head -> origin/gh/pearu/97/head 2025-03-17T17:41:36.1653242Z * [new branch] gh/pearu/97/orig -> origin/gh/pearu/97/orig 2025-03-17T17:41:36.1654969Z * [new branch] gh/peterbell10/603/base -> origin/gh/peterbell10/603/base 2025-03-17T17:41:36.1655926Z * [new branch] gh/peterbell10/603/head -> origin/gh/peterbell10/603/head 2025-03-17T17:41:36.1656875Z * [new branch] gh/peterbell10/603/orig -> origin/gh/peterbell10/603/orig 2025-03-17T17:41:36.1658375Z * [new branch] gh/peterbell10/635/base -> origin/gh/peterbell10/635/base 2025-03-17T17:41:36.1659373Z * [new branch] gh/peterbell10/635/head -> origin/gh/peterbell10/635/head 2025-03-17T17:41:36.1660851Z * [new branch] gh/peterbell10/635/orig -> origin/gh/peterbell10/635/orig 2025-03-17T17:41:36.1662074Z * [new branch] gh/peterbell10/636/base -> origin/gh/peterbell10/636/base 2025-03-17T17:41:36.1662940Z * [new branch] gh/peterbell10/636/head -> origin/gh/peterbell10/636/head 2025-03-17T17:41:36.1663893Z * [new branch] gh/peterbell10/636/orig -> origin/gh/peterbell10/636/orig 2025-03-17T17:41:36.1665452Z * [new branch] gh/qqaatw/26/base -> origin/gh/qqaatw/26/base 2025-03-17T17:41:36.1666452Z * [new branch] gh/qqaatw/26/head -> origin/gh/qqaatw/26/head 2025-03-17T17:41:36.1667429Z * [new branch] gh/qqaatw/26/orig -> origin/gh/qqaatw/26/orig 2025-03-17T17:41:36.1668920Z * [new branch] gh/raymo/log-graph-breaks -> origin/gh/raymo/log-graph-breaks 2025-03-17T17:41:36.1670342Z * [new branch] gh/rec/115/base -> origin/gh/rec/115/base 2025-03-17T17:41:36.1671258Z * [new branch] gh/rec/115/head -> origin/gh/rec/115/head 2025-03-17T17:41:36.1672239Z * [new branch] gh/rec/115/orig -> origin/gh/rec/115/orig 2025-03-17T17:41:36.1673591Z * [new branch] gh/rec/118/base -> origin/gh/rec/118/base 2025-03-17T17:41:36.1674444Z * [new branch] gh/rec/118/head -> origin/gh/rec/118/head 2025-03-17T17:41:36.1675431Z * [new branch] gh/rec/118/orig -> origin/gh/rec/118/orig 2025-03-17T17:41:36.1677125Z * [new branch] gh/rec/119/base -> origin/gh/rec/119/base 2025-03-17T17:41:36.1678032Z * [new branch] gh/rec/119/head -> origin/gh/rec/119/head 2025-03-17T17:41:36.1679075Z * [new branch] gh/rec/119/orig -> origin/gh/rec/119/orig 2025-03-17T17:41:36.1680476Z * [new branch] gh/rec/120/base -> origin/gh/rec/120/base 2025-03-17T17:41:36.1681419Z * [new branch] gh/rec/120/head -> origin/gh/rec/120/head 2025-03-17T17:41:36.1682450Z * [new branch] gh/rec/120/orig -> origin/gh/rec/120/orig 2025-03-17T17:41:36.1683687Z * [new branch] gh/rec/124/base -> origin/gh/rec/124/base 2025-03-17T17:41:36.1684599Z * [new branch] gh/rec/124/head -> origin/gh/rec/124/head 2025-03-17T17:41:36.1685583Z * [new branch] gh/rec/124/orig -> origin/gh/rec/124/orig 2025-03-17T17:41:36.1686890Z * [new branch] gh/rec/125/base -> origin/gh/rec/125/base 2025-03-17T17:41:36.1687757Z * [new branch] gh/rec/125/head -> origin/gh/rec/125/head 2025-03-17T17:41:36.1688714Z * [new branch] gh/rec/125/orig -> origin/gh/rec/125/orig 2025-03-17T17:41:36.1690124Z * [new branch] gh/rec/128/base -> origin/gh/rec/128/base 2025-03-17T17:41:36.1690979Z * [new branch] gh/rec/128/head -> origin/gh/rec/128/head 2025-03-17T17:41:36.1691915Z * [new branch] gh/rec/128/orig -> origin/gh/rec/128/orig 2025-03-17T17:41:36.1693263Z * [new branch] gh/rec/129/base -> 
origin/gh/rec/129/base 2025-03-17T17:41:36.1694324Z * [new branch] gh/rec/129/head -> origin/gh/rec/129/head 2025-03-17T17:41:36.1695285Z * [new branch] gh/rec/129/orig -> origin/gh/rec/129/orig 2025-03-17T17:41:36.1696628Z * [new branch] gh/rec/132/base -> origin/gh/rec/132/base 2025-03-17T17:41:36.1697529Z * [new branch] gh/rec/132/head -> origin/gh/rec/132/head 2025-03-17T17:41:36.1698492Z * [new branch] gh/rec/132/orig -> origin/gh/rec/132/orig 2025-03-17T17:41:36.1699819Z * [new branch] gh/rec/133/base -> origin/gh/rec/133/base 2025-03-17T17:41:36.1700729Z * [new branch] gh/rec/133/head -> origin/gh/rec/133/head 2025-03-17T17:41:36.1701633Z * [new branch] gh/rec/133/orig -> origin/gh/rec/133/orig 2025-03-17T17:41:36.1703043Z * [new branch] gh/rec/134/base -> origin/gh/rec/134/base 2025-03-17T17:41:36.1703877Z * [new branch] gh/rec/134/head -> origin/gh/rec/134/head 2025-03-17T17:41:36.1704853Z * [new branch] gh/rec/134/orig -> origin/gh/rec/134/orig 2025-03-17T17:41:36.1706196Z * [new branch] gh/rec/135/base -> origin/gh/rec/135/base 2025-03-17T17:41:36.1707207Z * [new branch] gh/rec/135/head -> origin/gh/rec/135/head 2025-03-17T17:41:36.1708124Z * [new branch] gh/rec/135/orig -> origin/gh/rec/135/orig 2025-03-17T17:41:36.1709485Z * [new branch] gh/rec/136/base -> origin/gh/rec/136/base 2025-03-17T17:41:36.1710402Z * [new branch] gh/rec/136/head -> origin/gh/rec/136/head 2025-03-17T17:41:36.1711359Z * [new branch] gh/rec/136/orig -> origin/gh/rec/136/orig 2025-03-17T17:41:36.1712708Z * [new branch] gh/rec/137/base -> origin/gh/rec/137/base 2025-03-17T17:41:36.1713586Z * [new branch] gh/rec/137/head -> origin/gh/rec/137/head 2025-03-17T17:41:36.1714553Z * [new branch] gh/rec/137/orig -> origin/gh/rec/137/orig 2025-03-17T17:41:36.1715875Z * [new branch] gh/rec/138/base -> origin/gh/rec/138/base 2025-03-17T17:41:36.1716749Z * [new branch] gh/rec/138/head -> origin/gh/rec/138/head 2025-03-17T17:41:36.1717735Z * [new branch] gh/rec/138/orig -> origin/gh/rec/138/orig 2025-03-17T17:41:36.1719081Z * [new branch] gh/rec/139/base -> origin/gh/rec/139/base 2025-03-17T17:41:36.1719932Z * [new branch] gh/rec/139/head -> origin/gh/rec/139/head 2025-03-17T17:41:36.1720918Z * [new branch] gh/rec/139/orig -> origin/gh/rec/139/orig 2025-03-17T17:41:36.1722314Z * [new branch] gh/rec/27/base -> origin/gh/rec/27/base 2025-03-17T17:41:36.1723092Z * [new branch] gh/rec/27/head -> origin/gh/rec/27/head 2025-03-17T17:41:36.1724129Z * [new branch] gh/rec/27/orig -> origin/gh/rec/27/orig 2025-03-17T17:41:36.1725821Z * [new branch] gh/rohan-varma/742/base -> origin/gh/rohan-varma/742/base 2025-03-17T17:41:36.1726796Z * [new branch] gh/rohan-varma/742/head -> origin/gh/rohan-varma/742/head 2025-03-17T17:41:36.1727852Z * [new branch] gh/rohan-varma/742/orig -> origin/gh/rohan-varma/742/orig 2025-03-17T17:41:36.1729430Z * [new branch] gh/seemethere/10/base -> origin/gh/seemethere/10/base 2025-03-17T17:41:36.1730433Z * [new branch] gh/seemethere/10/head -> origin/gh/seemethere/10/head 2025-03-17T17:41:36.1731266Z * [new branch] gh/seemethere/10/orig -> origin/gh/seemethere/10/orig 2025-03-17T17:41:36.1732537Z * [new branch] gh/seemethere/11/base -> origin/gh/seemethere/11/base 2025-03-17T17:41:36.1733462Z * [new branch] gh/seemethere/11/head -> origin/gh/seemethere/11/head 2025-03-17T17:41:36.1734427Z * [new branch] gh/seemethere/11/orig -> origin/gh/seemethere/11/orig 2025-03-17T17:41:36.1735777Z * [new branch] gh/seemethere/12/base -> origin/gh/seemethere/12/base 2025-03-17T17:41:36.1736628Z * [new branch] 
gh/seemethere/12/head -> origin/gh/seemethere/12/head 2025-03-17T17:41:36.1737805Z * [new branch] gh/seemethere/12/orig -> origin/gh/seemethere/12/orig 2025-03-17T17:41:36.1739170Z * [new branch] gh/seemethere/13/base -> origin/gh/seemethere/13/base 2025-03-17T17:41:36.1740034Z * [new branch] gh/seemethere/13/head -> origin/gh/seemethere/13/head 2025-03-17T17:41:36.1741068Z * [new branch] gh/seemethere/13/orig -> origin/gh/seemethere/13/orig 2025-03-17T17:41:36.1742498Z * [new branch] gh/seemethere/14/base -> origin/gh/seemethere/14/base 2025-03-17T17:41:36.1743369Z * [new branch] gh/seemethere/14/head -> origin/gh/seemethere/14/head 2025-03-17T17:41:36.1744378Z * [new branch] gh/seemethere/14/orig -> origin/gh/seemethere/14/orig 2025-03-17T17:41:36.1745738Z * [new branch] gh/seemethere/15/base -> origin/gh/seemethere/15/base 2025-03-17T17:41:36.1746683Z * [new branch] gh/seemethere/15/head -> origin/gh/seemethere/15/head 2025-03-17T17:41:36.1747727Z * [new branch] gh/seemethere/15/orig -> origin/gh/seemethere/15/orig 2025-03-17T17:41:36.1749056Z * [new branch] gh/seemethere/16/base -> origin/gh/seemethere/16/base 2025-03-17T17:41:36.1749990Z * [new branch] gh/seemethere/16/head -> origin/gh/seemethere/16/head 2025-03-17T17:41:36.1751033Z * [new branch] gh/seemethere/16/orig -> origin/gh/seemethere/16/orig 2025-03-17T17:41:36.1752368Z * [new branch] gh/seemethere/17/base -> origin/gh/seemethere/17/base 2025-03-17T17:41:36.1753285Z * [new branch] gh/seemethere/17/head -> origin/gh/seemethere/17/head 2025-03-17T17:41:36.1754478Z * [new branch] gh/seemethere/17/orig -> origin/gh/seemethere/17/orig 2025-03-17T17:41:36.1756157Z * [new branch] gh/seemethere/18/base -> origin/gh/seemethere/18/base 2025-03-17T17:41:36.1757086Z * [new branch] gh/seemethere/18/head -> origin/gh/seemethere/18/head 2025-03-17T17:41:36.1758061Z * [new branch] gh/seemethere/18/orig -> origin/gh/seemethere/18/orig 2025-03-17T17:41:36.1759434Z * [new branch] gh/seemethere/19/base -> origin/gh/seemethere/19/base 2025-03-17T17:41:36.1760318Z * [new branch] gh/seemethere/19/head -> origin/gh/seemethere/19/head 2025-03-17T17:41:36.1761537Z * [new branch] gh/seemethere/19/orig -> origin/gh/seemethere/19/orig 2025-03-17T17:41:36.1762691Z * [new branch] gh/seemethere/20/base -> origin/gh/seemethere/20/base 2025-03-17T17:41:36.1763649Z * [new branch] gh/seemethere/20/head -> origin/gh/seemethere/20/head 2025-03-17T17:41:36.1764605Z * [new branch] gh/seemethere/20/orig -> origin/gh/seemethere/20/orig 2025-03-17T17:41:36.1765939Z * [new branch] gh/seemethere/7/base -> origin/gh/seemethere/7/base 2025-03-17T17:41:36.1766833Z * [new branch] gh/seemethere/7/head -> origin/gh/seemethere/7/head 2025-03-17T17:41:36.1767825Z * [new branch] gh/seemethere/7/orig -> origin/gh/seemethere/7/orig 2025-03-17T17:41:36.1769187Z * [new branch] gh/seemethere/8/base -> origin/gh/seemethere/8/base 2025-03-17T17:41:36.1770066Z * [new branch] gh/seemethere/8/head -> origin/gh/seemethere/8/head 2025-03-17T17:41:36.1771049Z * [new branch] gh/seemethere/8/orig -> origin/gh/seemethere/8/orig 2025-03-17T17:41:36.1772357Z * [new branch] gh/seemethere/9/base -> origin/gh/seemethere/9/base 2025-03-17T17:41:36.1773257Z * [new branch] gh/seemethere/9/head -> origin/gh/seemethere/9/head 2025-03-17T17:41:36.1774302Z * [new branch] gh/seemethere/9/orig -> origin/gh/seemethere/9/orig 2025-03-17T17:41:36.1776151Z * [new branch] gh/shunting314/145/base -> origin/gh/shunting314/145/base 2025-03-17T17:41:36.1777156Z * [new branch] gh/shunting314/145/head -> 
origin/gh/shunting314/145/head 2025-03-17T17:41:36.1778186Z * [new branch] gh/shunting314/145/orig -> origin/gh/shunting314/145/orig 2025-03-17T17:41:36.1779800Z * [new branch] gh/shunting314/151/base -> origin/gh/shunting314/151/base 2025-03-17T17:41:36.1780681Z * [new branch] gh/shunting314/151/head -> origin/gh/shunting314/151/head 2025-03-17T17:41:36.1781676Z * [new branch] gh/shunting314/151/orig -> origin/gh/shunting314/151/orig 2025-03-17T17:41:36.1783115Z * [new branch] gh/shunting314/176/base -> origin/gh/shunting314/176/base 2025-03-17T17:41:36.1784063Z * [new branch] gh/shunting314/176/head -> origin/gh/shunting314/176/head 2025-03-17T17:41:36.1785021Z * [new branch] gh/shunting314/176/orig -> origin/gh/shunting314/176/orig 2025-03-17T17:41:36.1786609Z * [new branch] gh/shunting314/199/base -> origin/gh/shunting314/199/base 2025-03-17T17:41:36.1787681Z * [new branch] gh/shunting314/199/head -> origin/gh/shunting314/199/head 2025-03-17T17:41:36.1788628Z * [new branch] gh/shunting314/199/orig -> origin/gh/shunting314/199/orig 2025-03-17T17:41:36.1790071Z * [new branch] gh/shunting314/200/base -> origin/gh/shunting314/200/base 2025-03-17T17:41:36.1790932Z * [new branch] gh/shunting314/200/head -> origin/gh/shunting314/200/head 2025-03-17T17:41:36.1792197Z * [new branch] gh/shunting314/201/base -> origin/gh/shunting314/201/base 2025-03-17T17:41:36.1793139Z * [new branch] gh/shunting314/201/head -> origin/gh/shunting314/201/head 2025-03-17T17:41:36.1794196Z * [new branch] gh/shunting314/201/orig -> origin/gh/shunting314/201/orig 2025-03-17T17:41:36.1795582Z * [new branch] gh/sijiac/1/base -> origin/gh/sijiac/1/base 2025-03-17T17:41:36.1796489Z * [new branch] gh/sijiac/1/head -> origin/gh/sijiac/1/head 2025-03-17T17:41:36.1797668Z * [new branch] gh/sijiac/2/base -> origin/gh/sijiac/2/base 2025-03-17T17:41:36.1798633Z * [new branch] gh/sijiac/2/head -> origin/gh/sijiac/2/head 2025-03-17T17:41:36.1799948Z * [new branch] gh/sijiac/3/base -> origin/gh/sijiac/3/base 2025-03-17T17:41:36.1800669Z * [new branch] gh/sijiac/3/head -> origin/gh/sijiac/3/head 2025-03-17T17:41:36.1802428Z * [new branch] gh/silverguo/1/base -> origin/gh/silverguo/1/base 2025-03-17T17:41:36.1803337Z * [new branch] gh/silverguo/1/head -> origin/gh/silverguo/1/head 2025-03-17T17:41:36.1804565Z * [new branch] gh/silverguo/2/base -> origin/gh/silverguo/2/base 2025-03-17T17:41:36.1805425Z * [new branch] gh/silverguo/2/head -> origin/gh/silverguo/2/head 2025-03-17T17:41:36.1806658Z * [new branch] gh/silverguo/3/base -> origin/gh/silverguo/3/base 2025-03-17T17:41:36.1807564Z * [new branch] gh/silverguo/3/head -> origin/gh/silverguo/3/head 2025-03-17T17:41:36.1808807Z * [new branch] gh/silverguo/4/base -> origin/gh/silverguo/4/base 2025-03-17T17:41:36.1809715Z * [new branch] gh/silverguo/4/head -> origin/gh/silverguo/4/head 2025-03-17T17:41:36.1811297Z * [new branch] gh/sinhaanhsul/1/base -> origin/gh/sinhaanhsul/1/base 2025-03-17T17:41:36.1812242Z * [new branch] gh/sinhaanhsul/1/head -> origin/gh/sinhaanhsul/1/head 2025-03-17T17:41:36.1813924Z * [new branch] gh/soulitzer/269/base -> origin/gh/soulitzer/269/base 2025-03-17T17:41:36.1814820Z * [new branch] gh/soulitzer/269/head -> origin/gh/soulitzer/269/head 2025-03-17T17:41:36.1815782Z * [new branch] gh/soulitzer/269/orig -> origin/gh/soulitzer/269/orig 2025-03-17T17:41:36.1817288Z * [new branch] gh/soulitzer/276/base -> origin/gh/soulitzer/276/base 2025-03-17T17:41:36.1818162Z * [new branch] gh/soulitzer/276/head -> origin/gh/soulitzer/276/head 
2025-03-17T17:41:36.1819147Z * [new branch] gh/soulitzer/276/orig -> origin/gh/soulitzer/276/orig 2025-03-17T17:41:36.1820750Z * [new branch] gh/soulitzer/287/base -> origin/gh/soulitzer/287/base 2025-03-17T17:41:36.1821649Z * [new branch] gh/soulitzer/287/head -> origin/gh/soulitzer/287/head 2025-03-17T17:41:36.1822663Z * [new branch] gh/soulitzer/287/orig -> origin/gh/soulitzer/287/orig 2025-03-17T17:41:36.1824075Z * [new branch] gh/soulitzer/296/base -> origin/gh/soulitzer/296/base 2025-03-17T17:41:36.1825029Z * [new branch] gh/soulitzer/296/head -> origin/gh/soulitzer/296/head 2025-03-17T17:41:36.1825980Z * [new branch] gh/soulitzer/296/orig -> origin/gh/soulitzer/296/orig 2025-03-17T17:41:36.1827592Z * [new branch] gh/soulitzer/299/base -> origin/gh/soulitzer/299/base 2025-03-17T17:41:36.1828519Z * [new branch] gh/soulitzer/299/head -> origin/gh/soulitzer/299/head 2025-03-17T17:41:36.1829546Z * [new branch] gh/soulitzer/299/orig -> origin/gh/soulitzer/299/orig 2025-03-17T17:41:36.1830902Z * [new branch] gh/soulitzer/300/base -> origin/gh/soulitzer/300/base 2025-03-17T17:41:36.1831846Z * [new branch] gh/soulitzer/300/head -> origin/gh/soulitzer/300/head 2025-03-17T17:41:36.1832783Z * [new branch] gh/soulitzer/300/orig -> origin/gh/soulitzer/300/orig 2025-03-17T17:41:36.1834295Z * [new branch] gh/soulitzer/301/base -> origin/gh/soulitzer/301/base 2025-03-17T17:41:36.1835298Z * [new branch] gh/soulitzer/301/head -> origin/gh/soulitzer/301/head 2025-03-17T17:41:36.1836283Z * [new branch] gh/soulitzer/301/orig -> origin/gh/soulitzer/301/orig 2025-03-17T17:41:36.1837911Z * [new branch] gh/soulitzer/313/base -> origin/gh/soulitzer/313/base 2025-03-17T17:41:36.1838796Z * [new branch] gh/soulitzer/313/head -> origin/gh/soulitzer/313/head 2025-03-17T17:41:36.1839961Z * [new branch] gh/soulitzer/313/orig -> origin/gh/soulitzer/313/orig 2025-03-17T17:41:36.1841167Z * [new branch] gh/soulitzer/319/base -> origin/gh/soulitzer/319/base 2025-03-17T17:41:36.1842072Z * [new branch] gh/soulitzer/319/head -> origin/gh/soulitzer/319/head 2025-03-17T17:41:36.1842999Z * [new branch] gh/soulitzer/319/orig -> origin/gh/soulitzer/319/orig 2025-03-17T17:41:36.1844482Z * [new branch] gh/soulitzer/320/base -> origin/gh/soulitzer/320/base 2025-03-17T17:41:36.1845352Z * [new branch] gh/soulitzer/320/head -> origin/gh/soulitzer/320/head 2025-03-17T17:41:36.1846249Z * [new branch] gh/soulitzer/320/orig -> origin/gh/soulitzer/320/orig 2025-03-17T17:41:36.1847661Z * [new branch] gh/soulitzer/329/base -> origin/gh/soulitzer/329/base 2025-03-17T17:41:36.1848595Z * [new branch] gh/soulitzer/329/head -> origin/gh/soulitzer/329/head 2025-03-17T17:41:36.1849559Z * [new branch] gh/soulitzer/329/orig -> origin/gh/soulitzer/329/orig 2025-03-17T17:41:36.1850895Z * [new branch] gh/soulitzer/331/base -> origin/gh/soulitzer/331/base 2025-03-17T17:41:36.1851860Z * [new branch] gh/soulitzer/331/head -> origin/gh/soulitzer/331/head 2025-03-17T17:41:36.1852772Z * [new branch] gh/soulitzer/331/orig -> origin/gh/soulitzer/331/orig 2025-03-17T17:41:36.1854350Z * [new branch] gh/soulitzer/332/base -> origin/gh/soulitzer/332/base 2025-03-17T17:41:36.1855164Z * [new branch] gh/soulitzer/332/head -> origin/gh/soulitzer/332/head 2025-03-17T17:41:36.1856134Z * [new branch] gh/soulitzer/332/orig -> origin/gh/soulitzer/332/orig 2025-03-17T17:41:36.1857496Z * [new branch] gh/soulitzer/335/base -> origin/gh/soulitzer/335/base 2025-03-17T17:41:36.1858369Z * [new branch] gh/soulitzer/335/head -> origin/gh/soulitzer/335/head 
2025-03-17T17:41:36.1859444Z * [new branch] gh/soulitzer/335/orig -> origin/gh/soulitzer/335/orig 2025-03-17T17:41:36.1860925Z * [new branch] gh/soulitzer/336/base -> origin/gh/soulitzer/336/base 2025-03-17T17:41:36.1861753Z * [new branch] gh/soulitzer/336/head -> origin/gh/soulitzer/336/head 2025-03-17T17:41:36.1862662Z * [new branch] gh/soulitzer/336/orig -> origin/gh/soulitzer/336/orig 2025-03-17T17:41:36.1864106Z * [new branch] gh/soulitzer/347/base -> origin/gh/soulitzer/347/base 2025-03-17T17:41:36.1865015Z * [new branch] gh/soulitzer/347/head -> origin/gh/soulitzer/347/head 2025-03-17T17:41:36.1865904Z * [new branch] gh/soulitzer/347/orig -> origin/gh/soulitzer/347/orig 2025-03-17T17:41:36.1868090Z * [new branch] gh/soulitzer/349/base -> origin/gh/soulitzer/349/base 2025-03-17T17:41:36.1886317Z * [new branch] gh/soulitzer/349/head -> origin/gh/soulitzer/349/head 2025-03-17T17:41:36.1887190Z * [new branch] gh/soulitzer/349/orig -> origin/gh/soulitzer/349/orig 2025-03-17T17:41:36.1887645Z * [new branch] gh/soulitzer/350/base -> origin/gh/soulitzer/350/base 2025-03-17T17:41:36.1888110Z * [new branch] gh/soulitzer/350/head -> origin/gh/soulitzer/350/head 2025-03-17T17:41:36.1888489Z * [new branch] gh/soulitzer/350/orig -> origin/gh/soulitzer/350/orig 2025-03-17T17:41:36.1888726Z * [new branch] gh/soulitzer/351/base -> origin/gh/soulitzer/351/base 2025-03-17T17:41:36.1888973Z * [new branch] gh/soulitzer/351/head -> origin/gh/soulitzer/351/head 2025-03-17T17:41:36.1889375Z * [new branch] gh/soulitzer/351/orig -> origin/gh/soulitzer/351/orig 2025-03-17T17:41:36.1889846Z * [new branch] gh/soulitzer/353/base -> origin/gh/soulitzer/353/base 2025-03-17T17:41:36.1890390Z * [new branch] gh/soulitzer/353/head -> origin/gh/soulitzer/353/head 2025-03-17T17:41:36.1890813Z * [new branch] gh/soulitzer/353/orig -> origin/gh/soulitzer/353/orig 2025-03-17T17:41:36.1891141Z * [new branch] gh/soulitzer/354/base -> origin/gh/soulitzer/354/base 2025-03-17T17:41:36.1891629Z * [new branch] gh/soulitzer/354/head -> origin/gh/soulitzer/354/head 2025-03-17T17:41:36.1891921Z * [new branch] gh/soulitzer/354/orig -> origin/gh/soulitzer/354/orig 2025-03-17T17:41:36.1892143Z * [new branch] gh/suo/619/base -> origin/gh/suo/619/base 2025-03-17T17:41:36.1892561Z * [new branch] gh/swolchok/704/base -> origin/gh/swolchok/704/base 2025-03-17T17:41:36.1893012Z * [new branch] gh/swolchok/704/orig -> origin/gh/swolchok/704/orig 2025-03-17T17:41:36.1893316Z * [new branch] gh/swolchok/722/base -> origin/gh/swolchok/722/base 2025-03-17T17:41:36.1893807Z * [new branch] gh/swolchok/722/head -> origin/gh/swolchok/722/head 2025-03-17T17:41:36.1894126Z * [new branch] gh/swolchok/722/orig -> origin/gh/swolchok/722/orig 2025-03-17T17:41:36.1894576Z * [new branch] gh/swolchok/723/base -> origin/gh/swolchok/723/base 2025-03-17T17:41:36.1894968Z * [new branch] gh/swolchok/723/head -> origin/gh/swolchok/723/head 2025-03-17T17:41:36.1895210Z * [new branch] gh/swolchok/723/orig -> origin/gh/swolchok/723/orig 2025-03-17T17:41:36.1896599Z * [new branch] gh/syed-ahmed/1/base -> origin/gh/syed-ahmed/1/base 2025-03-17T17:41:36.1897488Z * [new branch] gh/syed-ahmed/1/head -> origin/gh/syed-ahmed/1/head 2025-03-17T17:41:36.1898494Z * [new branch] gh/syed-ahmed/1/orig -> origin/gh/syed-ahmed/1/orig 2025-03-17T17:41:36.1899755Z * [new branch] gh/syed-ahmed/2/base -> origin/gh/syed-ahmed/2/base 2025-03-17T17:41:36.1900700Z * [new branch] gh/syed-ahmed/2/head -> origin/gh/syed-ahmed/2/head 2025-03-17T17:41:36.1901657Z * [new branch] gh/syed-ahmed/2/orig 
-> origin/gh/syed-ahmed/2/orig 2025-03-17T17:41:36.1903831Z * [new branch] gh/tianyu-l/2/base -> origin/gh/tianyu-l/2/base 2025-03-17T17:41:36.1904841Z * [new branch] gh/tianyu-l/2/head -> origin/gh/tianyu-l/2/head 2025-03-17T17:41:36.1905889Z * [new branch] gh/tianyu-l/2/orig -> origin/gh/tianyu-l/2/orig 2025-03-17T17:41:36.1907375Z * [new branch] gh/tianyu-l/6/base -> origin/gh/tianyu-l/6/base 2025-03-17T17:41:36.1908283Z * [new branch] gh/tianyu-l/6/head -> origin/gh/tianyu-l/6/head 2025-03-17T17:41:36.1909252Z * [new branch] gh/tianyu-l/6/orig -> origin/gh/tianyu-l/6/orig 2025-03-17T17:41:36.1910596Z * [new branch] gh/tianyu-l/7/base -> origin/gh/tianyu-l/7/base 2025-03-17T17:41:36.1911507Z * [new branch] gh/tianyu-l/7/head -> origin/gh/tianyu-l/7/head 2025-03-17T17:41:36.1912436Z * [new branch] gh/tianyu-l/7/orig -> origin/gh/tianyu-l/7/orig 2025-03-17T17:41:36.1914215Z * [new branch] gh/tugsbayasgalan/155/base -> origin/gh/tugsbayasgalan/155/base 2025-03-17T17:41:36.1915110Z * [new branch] gh/tugsbayasgalan/155/head -> origin/gh/tugsbayasgalan/155/head 2025-03-17T17:41:36.1916135Z * [new branch] gh/tugsbayasgalan/155/orig -> origin/gh/tugsbayasgalan/155/orig 2025-03-17T17:41:36.1917524Z * [new branch] gh/tugsbayasgalan/162/base -> origin/gh/tugsbayasgalan/162/base 2025-03-17T17:41:36.1918374Z * [new branch] gh/tugsbayasgalan/162/head -> origin/gh/tugsbayasgalan/162/head 2025-03-17T17:41:36.1919331Z * [new branch] gh/tugsbayasgalan/162/orig -> origin/gh/tugsbayasgalan/162/orig 2025-03-17T17:41:36.1920841Z * [new branch] gh/tugsbayasgalan/277/base -> origin/gh/tugsbayasgalan/277/base 2025-03-17T17:41:36.1921672Z * [new branch] gh/tugsbayasgalan/277/head -> origin/gh/tugsbayasgalan/277/head 2025-03-17T17:41:36.1922653Z * [new branch] gh/tugsbayasgalan/277/orig -> origin/gh/tugsbayasgalan/277/orig 2025-03-17T17:41:36.1924312Z * [new branch] gh/tugsbayasgalan/282/base -> origin/gh/tugsbayasgalan/282/base 2025-03-17T17:41:36.1925346Z * [new branch] gh/tugsbayasgalan/282/head -> origin/gh/tugsbayasgalan/282/head 2025-03-17T17:41:36.1926377Z * [new branch] gh/tugsbayasgalan/282/orig -> origin/gh/tugsbayasgalan/282/orig 2025-03-17T17:41:36.1927759Z * [new branch] gh/tugsbayasgalan/290/base -> origin/gh/tugsbayasgalan/290/base 2025-03-17T17:41:36.1928686Z * [new branch] gh/tugsbayasgalan/290/head -> origin/gh/tugsbayasgalan/290/head 2025-03-17T17:41:36.1929665Z * [new branch] gh/tugsbayasgalan/290/orig -> origin/gh/tugsbayasgalan/290/orig 2025-03-17T17:41:36.1931071Z * [new branch] gh/tugsbayasgalan/291/base -> origin/gh/tugsbayasgalan/291/base 2025-03-17T17:41:36.1932110Z * [new branch] gh/tugsbayasgalan/291/head -> origin/gh/tugsbayasgalan/291/head 2025-03-17T17:41:36.1933070Z * [new branch] gh/tugsbayasgalan/291/orig -> origin/gh/tugsbayasgalan/291/orig 2025-03-17T17:41:36.1934521Z * [new branch] gh/tugsbayasgalan/292/base -> origin/gh/tugsbayasgalan/292/base 2025-03-17T17:41:36.1935379Z * [new branch] gh/tugsbayasgalan/292/head -> origin/gh/tugsbayasgalan/292/head 2025-03-17T17:41:36.1936415Z * [new branch] gh/tugsbayasgalan/292/orig -> origin/gh/tugsbayasgalan/292/orig 2025-03-17T17:41:36.1938069Z * [new branch] gh/tugsbayasgalan/293/base -> origin/gh/tugsbayasgalan/293/base 2025-03-17T17:41:36.1939028Z * [new branch] gh/tugsbayasgalan/293/head -> origin/gh/tugsbayasgalan/293/head 2025-03-17T17:41:36.1939967Z * [new branch] gh/tugsbayasgalan/293/orig -> origin/gh/tugsbayasgalan/293/orig 2025-03-17T17:41:36.1941461Z * [new branch] gh/tugsbayasgalan/294/base -> 
origin/gh/tugsbayasgalan/294/base 2025-03-17T17:41:36.1942395Z * [new branch] gh/tugsbayasgalan/294/head -> origin/gh/tugsbayasgalan/294/head 2025-03-17T17:41:36.1943292Z * [new branch] gh/tugsbayasgalan/294/orig -> origin/gh/tugsbayasgalan/294/orig 2025-03-17T17:41:36.1944730Z * [new branch] gh/tugsbayasgalan/295/base -> origin/gh/tugsbayasgalan/295/base 2025-03-17T17:41:36.1945672Z * [new branch] gh/tugsbayasgalan/295/head -> origin/gh/tugsbayasgalan/295/head 2025-03-17T17:41:36.1946846Z * [new branch] gh/tugsbayasgalan/295/orig -> origin/gh/tugsbayasgalan/295/orig 2025-03-17T17:41:36.1948375Z * [new branch] gh/tugsbayasgalan/296/base -> origin/gh/tugsbayasgalan/296/base 2025-03-17T17:41:36.1949867Z * [new branch] gh/tugsbayasgalan/296/head -> origin/gh/tugsbayasgalan/296/head 2025-03-17T17:41:36.1950805Z * [new branch] gh/tugsbayasgalan/296/orig -> origin/gh/tugsbayasgalan/296/orig 2025-03-17T17:41:36.1952345Z * [new branch] gh/tugsbayasgalan/297/base -> origin/gh/tugsbayasgalan/297/base 2025-03-17T17:41:36.1953180Z * [new branch] gh/tugsbayasgalan/297/head -> origin/gh/tugsbayasgalan/297/head 2025-03-17T17:41:36.1954142Z * [new branch] gh/tugsbayasgalan/297/orig -> origin/gh/tugsbayasgalan/297/orig 2025-03-17T17:41:36.1955578Z * [new branch] gh/tugsbayasgalan/298/base -> origin/gh/tugsbayasgalan/298/base 2025-03-17T17:41:36.1956574Z * [new branch] gh/tugsbayasgalan/298/head -> origin/gh/tugsbayasgalan/298/head 2025-03-17T17:41:36.1957597Z * [new branch] gh/tugsbayasgalan/298/orig -> origin/gh/tugsbayasgalan/298/orig 2025-03-17T17:41:36.1959045Z * [new branch] gh/tugsbayasgalan/299/base -> origin/gh/tugsbayasgalan/299/base 2025-03-17T17:41:36.1959856Z * [new branch] gh/tugsbayasgalan/299/head -> origin/gh/tugsbayasgalan/299/head 2025-03-17T17:41:36.1960859Z * [new branch] gh/tugsbayasgalan/299/orig -> origin/gh/tugsbayasgalan/299/orig 2025-03-17T17:41:36.1962124Z * [new branch] gh/tugsbayasgalan/300/base -> origin/gh/tugsbayasgalan/300/base 2025-03-17T17:41:36.1963061Z * [new branch] gh/tugsbayasgalan/300/head -> origin/gh/tugsbayasgalan/300/head 2025-03-17T17:41:36.1964079Z * [new branch] gh/tugsbayasgalan/300/orig -> origin/gh/tugsbayasgalan/300/orig 2025-03-17T17:41:36.1965598Z * [new branch] gh/vkuzo/1/head -> origin/gh/vkuzo/1/head 2025-03-17T17:41:36.1966347Z * [new branch] gh/vkuzo/1/next -> origin/gh/vkuzo/1/next 2025-03-17T17:41:36.1967375Z * [new branch] gh/vkuzo/1/orig -> origin/gh/vkuzo/1/orig 2025-03-17T17:41:36.1969121Z * [new branch] gh/vkuzo/2/head -> origin/gh/vkuzo/2/head 2025-03-17T17:41:36.1969936Z * [new branch] gh/vkuzo/2/next -> origin/gh/vkuzo/2/next 2025-03-17T17:41:36.1970883Z * [new branch] gh/vkuzo/2/orig -> origin/gh/vkuzo/2/orig 2025-03-17T17:41:36.1972305Z * [new branch] gh/vkuzo/3/head -> origin/gh/vkuzo/3/head 2025-03-17T17:41:36.1973117Z * [new branch] gh/vkuzo/3/next -> origin/gh/vkuzo/3/next 2025-03-17T17:41:36.1974032Z * [new branch] gh/vkuzo/3/orig -> origin/gh/vkuzo/3/orig 2025-03-17T17:41:36.1975377Z * [new branch] gh/vkuzo/4/base -> origin/gh/vkuzo/4/base 2025-03-17T17:41:36.1976268Z * [new branch] gh/vkuzo/4/head -> origin/gh/vkuzo/4/head 2025-03-17T17:41:36.1977265Z * [new branch] gh/vkuzo/4/orig -> origin/gh/vkuzo/4/orig 2025-03-17T17:41:36.1978801Z * [new branch] gh/vkuzo/5/base -> origin/gh/vkuzo/5/base 2025-03-17T17:41:36.1979878Z * [new branch] gh/vkuzo/5/head -> origin/gh/vkuzo/5/head 2025-03-17T17:41:36.1980849Z * [new branch] gh/vkuzo/5/orig -> origin/gh/vkuzo/5/orig 2025-03-17T17:41:36.1982238Z * [new branch] gh/vkuzo/6/base -> 
origin/gh/vkuzo/6/base 2025-03-17T17:41:36.1983599Z * [new branch] gh/vkuzo/6/head -> origin/gh/vkuzo/6/head 2025-03-17T17:41:36.1984485Z * [new branch] gh/vkuzo/6/orig -> origin/gh/vkuzo/6/orig 2025-03-17T17:41:36.1985842Z * [new branch] gh/vkuzo/7/base -> origin/gh/vkuzo/7/base 2025-03-17T17:41:36.1986738Z * [new branch] gh/vkuzo/7/head -> origin/gh/vkuzo/7/head 2025-03-17T17:41:36.1987798Z * [new branch] gh/vkuzo/7/orig -> origin/gh/vkuzo/7/orig 2025-03-17T17:41:36.1989209Z * [new branch] gh/vkuzo/8/base -> origin/gh/vkuzo/8/base 2025-03-17T17:41:36.1990110Z * [new branch] gh/vkuzo/8/head -> origin/gh/vkuzo/8/head 2025-03-17T17:41:36.1991063Z * [new branch] gh/vkuzo/8/orig -> origin/gh/vkuzo/8/orig 2025-03-17T17:41:36.1992421Z * [new branch] gh/vkuzo/9/base -> origin/gh/vkuzo/9/base 2025-03-17T17:41:36.1993283Z * [new branch] gh/vkuzo/9/head -> origin/gh/vkuzo/9/head 2025-03-17T17:41:36.1994250Z * [new branch] gh/vkuzo/9/orig -> origin/gh/vkuzo/9/orig 2025-03-17T17:41:36.1995920Z * [new branch] gh/vmoens/10/base -> origin/gh/vmoens/10/base 2025-03-17T17:41:36.1996764Z * [new branch] gh/vmoens/10/head -> origin/gh/vmoens/10/head 2025-03-17T17:41:36.1997795Z * [new branch] gh/vmoens/10/orig -> origin/gh/vmoens/10/orig 2025-03-17T17:41:36.1999265Z * [new branch] gh/vmoens/15/base -> origin/gh/vmoens/15/base 2025-03-17T17:41:36.2000568Z * [new branch] gh/vmoens/15/head -> origin/gh/vmoens/15/head 2025-03-17T17:41:36.2001406Z * [new branch] gh/vmoens/15/orig -> origin/gh/vmoens/15/orig 2025-03-17T17:41:36.2003240Z * [new branch] gh/vmoens/16/base -> origin/gh/vmoens/16/base 2025-03-17T17:41:36.2004138Z * [new branch] gh/vmoens/16/head -> origin/gh/vmoens/16/head 2025-03-17T17:41:36.2005139Z * [new branch] gh/vmoens/16/orig -> origin/gh/vmoens/16/orig 2025-03-17T17:41:36.2006462Z * [new branch] gh/vmoens/17/base -> origin/gh/vmoens/17/base 2025-03-17T17:41:36.2007361Z * [new branch] gh/vmoens/17/head -> origin/gh/vmoens/17/head 2025-03-17T17:41:36.2008421Z * [new branch] gh/vmoens/17/orig -> origin/gh/vmoens/17/orig 2025-03-17T17:41:36.2009782Z * [new branch] gh/vmoens/18/base -> origin/gh/vmoens/18/base 2025-03-17T17:41:36.2010635Z * [new branch] gh/vmoens/18/head -> origin/gh/vmoens/18/head 2025-03-17T17:41:36.2011620Z * [new branch] gh/vmoens/18/orig -> origin/gh/vmoens/18/orig 2025-03-17T17:41:36.2012965Z * [new branch] gh/vmoens/19/base -> origin/gh/vmoens/19/base 2025-03-17T17:41:36.2013881Z * [new branch] gh/vmoens/19/head -> origin/gh/vmoens/19/head 2025-03-17T17:41:36.2014841Z * [new branch] gh/vmoens/19/orig -> origin/gh/vmoens/19/orig 2025-03-17T17:41:36.2016155Z * [new branch] gh/vmoens/20/base -> origin/gh/vmoens/20/base 2025-03-17T17:41:36.2017070Z * [new branch] gh/vmoens/20/head -> origin/gh/vmoens/20/head 2025-03-17T17:41:36.2018028Z * [new branch] gh/vmoens/20/orig -> origin/gh/vmoens/20/orig 2025-03-17T17:41:36.2020072Z * [new branch] gh/voznesenskym/231/base -> origin/gh/voznesenskym/231/base 2025-03-17T17:41:36.2021082Z * [new branch] gh/voznesenskym/231/head -> origin/gh/voznesenskym/231/head 2025-03-17T17:41:36.2022085Z * [new branch] gh/voznesenskym/231/orig -> origin/gh/voznesenskym/231/orig 2025-03-17T17:41:36.2023553Z * [new branch] gh/voznesenskym/254/base -> origin/gh/voznesenskym/254/base 2025-03-17T17:41:36.2024524Z * [new branch] gh/voznesenskym/254/head -> origin/gh/voznesenskym/254/head 2025-03-17T17:41:36.2025581Z * [new branch] gh/voznesenskym/254/orig -> origin/gh/voznesenskym/254/orig 2025-03-17T17:41:36.2027279Z * [new branch] gh/wanchaol/360/base -> 
origin/gh/wanchaol/360/base 2025-03-17T17:41:36.2028277Z * [new branch] gh/wanchaol/360/head -> origin/gh/wanchaol/360/head 2025-03-17T17:41:36.2029329Z * [new branch] gh/wanchaol/360/orig -> origin/gh/wanchaol/360/orig 2025-03-17T17:41:36.2030781Z * [new branch] gh/wanchaol/367/base -> origin/gh/wanchaol/367/base 2025-03-17T17:41:36.2031799Z * [new branch] gh/wanchaol/367/head -> origin/gh/wanchaol/367/head 2025-03-17T17:41:36.2032859Z * [new branch] gh/wanchaol/367/orig -> origin/gh/wanchaol/367/orig 2025-03-17T17:41:36.2034284Z * [new branch] gh/wanchaol/368/base -> origin/gh/wanchaol/368/base 2025-03-17T17:41:36.2035183Z * [new branch] gh/wanchaol/368/head -> origin/gh/wanchaol/368/head 2025-03-17T17:41:36.2036194Z * [new branch] gh/wanchaol/368/orig -> origin/gh/wanchaol/368/orig 2025-03-17T17:41:36.2040267Z * [new branch] gh/wconstab/204/base -> origin/gh/wconstab/204/base 2025-03-17T17:41:36.2041370Z * [new branch] gh/wconstab/204/orig -> origin/gh/wconstab/204/orig 2025-03-17T17:41:36.2042872Z * [new branch] gh/wconstab/380/base -> origin/gh/wconstab/380/base 2025-03-17T17:41:36.2043645Z * [new branch] gh/wconstab/380/head -> origin/gh/wconstab/380/head 2025-03-17T17:41:36.2044647Z * [new branch] gh/wconstab/380/orig -> origin/gh/wconstab/380/orig 2025-03-17T17:41:36.2046115Z * [new branch] gh/wconstab/392/base -> origin/gh/wconstab/392/base 2025-03-17T17:41:36.2046965Z * [new branch] gh/wconstab/392/head -> origin/gh/wconstab/392/head 2025-03-17T17:41:36.2047948Z * [new branch] gh/wconstab/392/orig -> origin/gh/wconstab/392/orig 2025-03-17T17:41:36.2049642Z * [new branch] gh/wconstab/395/base -> origin/gh/wconstab/395/base 2025-03-17T17:41:36.2050546Z * [new branch] gh/wconstab/395/head -> origin/gh/wconstab/395/head 2025-03-17T17:41:36.2052053Z * [new branch] gh/wconstab/395/orig -> origin/gh/wconstab/395/orig 2025-03-17T17:41:36.2053508Z * [new branch] gh/wconstab/396/base -> origin/gh/wconstab/396/base 2025-03-17T17:41:36.2054465Z * [new branch] gh/wconstab/396/head -> origin/gh/wconstab/396/head 2025-03-17T17:41:36.2055530Z * [new branch] gh/wconstab/396/orig -> origin/gh/wconstab/396/orig 2025-03-17T17:41:36.2056976Z * [new branch] gh/wconstab/397/base -> origin/gh/wconstab/397/base 2025-03-17T17:41:36.2057877Z * [new branch] gh/wconstab/397/head -> origin/gh/wconstab/397/head 2025-03-17T17:41:36.2058838Z * [new branch] gh/wconstab/397/orig -> origin/gh/wconstab/397/orig 2025-03-17T17:41:36.2060511Z * [new branch] gh/weifengpy/21/base -> origin/gh/weifengpy/21/base 2025-03-17T17:41:36.2061437Z * [new branch] gh/weifengpy/21/head -> origin/gh/weifengpy/21/head 2025-03-17T17:41:36.2062412Z * [new branch] gh/weifengpy/21/orig -> origin/gh/weifengpy/21/orig 2025-03-17T17:41:36.2063871Z * [new branch] gh/weifengpy/22/base -> origin/gh/weifengpy/22/base 2025-03-17T17:41:36.2064835Z * [new branch] gh/weifengpy/22/head -> origin/gh/weifengpy/22/head 2025-03-17T17:41:36.2065799Z * [new branch] gh/weifengpy/22/orig -> origin/gh/weifengpy/22/orig 2025-03-17T17:41:36.2067595Z * [new branch] gh/williamwen42/196/base -> origin/gh/williamwen42/196/base 2025-03-17T17:41:36.2068540Z * [new branch] gh/williamwen42/196/head -> origin/gh/williamwen42/196/head 2025-03-17T17:41:36.2069490Z * [new branch] gh/williamwen42/196/orig -> origin/gh/williamwen42/196/orig 2025-03-17T17:41:36.2070849Z * [new branch] gh/williamwen42/197/base -> origin/gh/williamwen42/197/base 2025-03-17T17:41:36.2071759Z * [new branch] gh/williamwen42/197/head -> origin/gh/williamwen42/197/head 2025-03-17T17:41:36.2072761Z * 
[new branch] gh/williamwen42/197/orig -> origin/gh/williamwen42/197/orig 2025-03-17T17:41:36.2074374Z * [new branch] gh/williamwen42/199/base -> origin/gh/williamwen42/199/base 2025-03-17T17:41:36.2075283Z * [new branch] gh/williamwen42/199/head -> origin/gh/williamwen42/199/head 2025-03-17T17:41:36.2076278Z * [new branch] gh/williamwen42/199/orig -> origin/gh/williamwen42/199/orig 2025-03-17T17:41:36.2077811Z * [new branch] gh/williamwen42/200/base -> origin/gh/williamwen42/200/base 2025-03-17T17:41:36.2078840Z * [new branch] gh/williamwen42/200/head -> origin/gh/williamwen42/200/head 2025-03-17T17:41:36.2079746Z * [new branch] gh/williamwen42/200/orig -> origin/gh/williamwen42/200/orig 2025-03-17T17:41:36.2081271Z * [new branch] gh/williamwen42/201/base -> origin/gh/williamwen42/201/base 2025-03-17T17:41:36.2082195Z * [new branch] gh/williamwen42/201/head -> origin/gh/williamwen42/201/head 2025-03-17T17:41:36.2083364Z * [new branch] gh/williamwen42/201/orig -> origin/gh/williamwen42/201/orig 2025-03-17T17:41:36.2084564Z * [new branch] gh/williamwen42/204/base -> origin/gh/williamwen42/204/base 2025-03-17T17:41:36.2085488Z * [new branch] gh/williamwen42/204/head -> origin/gh/williamwen42/204/head 2025-03-17T17:41:36.2086451Z * [new branch] gh/williamwen42/204/orig -> origin/gh/williamwen42/204/orig 2025-03-17T17:41:36.2087915Z * [new branch] gh/williamwen42/205/base -> origin/gh/williamwen42/205/base 2025-03-17T17:41:36.2088955Z * [new branch] gh/williamwen42/205/head -> origin/gh/williamwen42/205/head 2025-03-17T17:41:36.2089929Z * [new branch] gh/williamwen42/205/orig -> origin/gh/williamwen42/205/orig 2025-03-17T17:41:36.2091465Z * [new branch] gh/williamwen42/206/base -> origin/gh/williamwen42/206/base 2025-03-17T17:41:36.2092452Z * [new branch] gh/williamwen42/206/head -> origin/gh/williamwen42/206/head 2025-03-17T17:41:36.2093480Z * [new branch] gh/williamwen42/206/orig -> origin/gh/williamwen42/206/orig 2025-03-17T17:41:36.2094814Z * [new branch] gh/williamwen42/207/base -> origin/gh/williamwen42/207/base 2025-03-17T17:41:36.2096316Z * [new branch] gh/williamwen42/207/head -> origin/gh/williamwen42/207/head 2025-03-17T17:41:36.2097231Z * [new branch] gh/williamwen42/207/orig -> origin/gh/williamwen42/207/orig 2025-03-17T17:41:36.2099112Z * [new branch] gh/williamwen42/208/base -> origin/gh/williamwen42/208/base 2025-03-17T17:41:36.2100068Z * [new branch] gh/williamwen42/208/head -> origin/gh/williamwen42/208/head 2025-03-17T17:41:36.2101056Z * [new branch] gh/williamwen42/208/orig -> origin/gh/williamwen42/208/orig 2025-03-17T17:41:36.2102310Z * [new branch] gh/williamwen42/209/base -> origin/gh/williamwen42/209/base 2025-03-17T17:41:36.2103210Z * [new branch] gh/williamwen42/209/head -> origin/gh/williamwen42/209/head 2025-03-17T17:41:36.2104172Z * [new branch] gh/williamwen42/209/orig -> origin/gh/williamwen42/209/orig 2025-03-17T17:41:36.2105565Z * [new branch] gh/williamwen42/210/base -> origin/gh/williamwen42/210/base 2025-03-17T17:41:36.2106533Z * [new branch] gh/williamwen42/210/head -> origin/gh/williamwen42/210/head 2025-03-17T17:41:36.2107597Z * [new branch] gh/williamwen42/210/orig -> origin/gh/williamwen42/210/orig 2025-03-17T17:41:36.2108864Z * [new branch] gh/williamwen42/211/base -> origin/gh/williamwen42/211/base 2025-03-17T17:41:36.2109754Z * [new branch] gh/williamwen42/211/head -> origin/gh/williamwen42/211/head 2025-03-17T17:41:36.2111180Z * [new branch] gh/williamwen42/211/orig -> origin/gh/williamwen42/211/orig 2025-03-17T17:41:36.2112557Z * [new branch] 
gh/williamwen42/212/base -> origin/gh/williamwen42/212/base 2025-03-17T17:41:36.2113462Z * [new branch] gh/williamwen42/212/head -> origin/gh/williamwen42/212/head 2025-03-17T17:41:36.2114433Z * [new branch] gh/williamwen42/212/orig -> origin/gh/williamwen42/212/orig 2025-03-17T17:41:36.2115824Z * [new branch] gh/williamwen42/213/base -> origin/gh/williamwen42/213/base 2025-03-17T17:41:36.2116797Z * [new branch] gh/williamwen42/213/head -> origin/gh/williamwen42/213/head 2025-03-17T17:41:36.2117789Z * [new branch] gh/williamwen42/213/orig -> origin/gh/williamwen42/213/orig 2025-03-17T17:41:36.2119296Z * [new branch] gh/williamwen42/214/base -> origin/gh/williamwen42/214/base 2025-03-17T17:41:36.2120143Z * [new branch] gh/williamwen42/214/head -> origin/gh/williamwen42/214/head 2025-03-17T17:41:36.2121072Z * [new branch] gh/williamwen42/214/orig -> origin/gh/williamwen42/214/orig 2025-03-17T17:41:36.2123009Z * [new branch] gh/williamwen42/215/base -> origin/gh/williamwen42/215/base 2025-03-17T17:41:36.2123789Z * [new branch] gh/williamwen42/215/head -> origin/gh/williamwen42/215/head 2025-03-17T17:41:36.2124833Z * [new branch] gh/williamwen42/215/orig -> origin/gh/williamwen42/215/orig 2025-03-17T17:41:36.2126443Z * [new branch] gh/williamwen42/216/base -> origin/gh/williamwen42/216/base 2025-03-17T17:41:36.2127322Z * [new branch] gh/williamwen42/216/head -> origin/gh/williamwen42/216/head 2025-03-17T17:41:36.2128321Z * [new branch] gh/williamwen42/216/orig -> origin/gh/williamwen42/216/orig 2025-03-17T17:41:36.2129825Z * [new branch] gh/williamwen42/217/base -> origin/gh/williamwen42/217/base 2025-03-17T17:41:36.2131253Z * [new branch] gh/williamwen42/217/head -> origin/gh/williamwen42/217/head 2025-03-17T17:41:36.2132187Z * [new branch] gh/williamwen42/217/orig -> origin/gh/williamwen42/217/orig 2025-03-17T17:41:36.2133866Z * [new branch] gh/williamwen42/218/base -> origin/gh/williamwen42/218/base 2025-03-17T17:41:36.2134911Z * [new branch] gh/williamwen42/218/head -> origin/gh/williamwen42/218/head 2025-03-17T17:41:36.2135958Z * [new branch] gh/williamwen42/218/orig -> origin/gh/williamwen42/218/orig 2025-03-17T17:41:36.2137695Z * [new branch] gh/williamwen42/219/base -> origin/gh/williamwen42/219/base 2025-03-17T17:41:36.2138637Z * [new branch] gh/williamwen42/219/head -> origin/gh/williamwen42/219/head 2025-03-17T17:41:36.2139577Z * [new branch] gh/williamwen42/219/orig -> origin/gh/williamwen42/219/orig 2025-03-17T17:41:36.2141217Z * [new branch] gh/wz337/2/base -> origin/gh/wz337/2/base 2025-03-17T17:41:36.2142126Z * [new branch] gh/wz337/2/head -> origin/gh/wz337/2/head 2025-03-17T17:41:36.2143278Z * [new branch] gh/wz337/3/base -> origin/gh/wz337/3/base 2025-03-17T17:41:36.2144178Z * [new branch] gh/wz337/3/head -> origin/gh/wz337/3/head 2025-03-17T17:41:36.2145800Z * [new branch] gh/xmfan/138/base -> origin/gh/xmfan/138/base 2025-03-17T17:41:36.2146840Z * [new branch] gh/xmfan/138/head -> origin/gh/xmfan/138/head 2025-03-17T17:41:36.2147823Z * [new branch] gh/xmfan/138/orig -> origin/gh/xmfan/138/orig 2025-03-17T17:41:36.2149364Z * [new branch] gh/xmfan/140/base -> origin/gh/xmfan/140/base 2025-03-17T17:41:36.2150192Z * [new branch] gh/xmfan/140/head -> origin/gh/xmfan/140/head 2025-03-17T17:41:36.2151213Z * [new branch] gh/xmfan/140/orig -> origin/gh/xmfan/140/orig 2025-03-17T17:41:36.2152573Z * [new branch] gh/xmfan/157/base -> origin/gh/xmfan/157/base 2025-03-17T17:41:36.2153458Z * [new branch] gh/xmfan/157/head -> origin/gh/xmfan/157/head 2025-03-17T17:41:36.2154421Z * [new 
branch] gh/xmfan/157/orig -> origin/gh/xmfan/157/orig 2025-03-17T17:41:36.2155757Z * [new branch] gh/xmfan/166/base -> origin/gh/xmfan/166/base 2025-03-17T17:41:36.2156652Z * [new branch] gh/xmfan/166/head -> origin/gh/xmfan/166/head 2025-03-17T17:41:36.2157666Z * [new branch] gh/xmfan/166/orig -> origin/gh/xmfan/166/orig 2025-03-17T17:41:36.2159047Z * [new branch] gh/xmfan/169/base -> origin/gh/xmfan/169/base 2025-03-17T17:41:36.2159903Z * [new branch] gh/xmfan/169/head -> origin/gh/xmfan/169/head 2025-03-17T17:41:36.2161168Z * [new branch] gh/xmfan/170/base -> origin/gh/xmfan/170/base 2025-03-17T17:41:36.2161965Z * [new branch] gh/xmfan/170/head -> origin/gh/xmfan/170/head 2025-03-17T17:41:36.2163554Z * [new branch] gh/xmfan/173/base -> origin/gh/xmfan/173/base 2025-03-17T17:41:36.2164290Z * [new branch] gh/xmfan/173/head -> origin/gh/xmfan/173/head 2025-03-17T17:41:36.2165226Z * [new branch] gh/xmfan/173/orig -> origin/gh/xmfan/173/orig 2025-03-17T17:41:36.2166989Z * [new branch] gh/xmfan/174/base -> origin/gh/xmfan/174/base 2025-03-17T17:41:36.2167876Z * [new branch] gh/xmfan/174/head -> origin/gh/xmfan/174/head 2025-03-17T17:41:36.2168862Z * [new branch] gh/xmfan/174/orig -> origin/gh/xmfan/174/orig 2025-03-17T17:41:36.2170180Z * [new branch] gh/xmfan/177/base -> origin/gh/xmfan/177/base 2025-03-17T17:41:36.2171063Z * [new branch] gh/xmfan/177/head -> origin/gh/xmfan/177/head 2025-03-17T17:41:36.2171995Z * [new branch] gh/xmfan/177/orig -> origin/gh/xmfan/177/orig 2025-03-17T17:41:36.2173372Z * [new branch] gh/xmfan/178/base -> origin/gh/xmfan/178/base 2025-03-17T17:41:36.2174238Z * [new branch] gh/xmfan/178/head -> origin/gh/xmfan/178/head 2025-03-17T17:41:36.2175222Z * [new branch] gh/xmfan/178/orig -> origin/gh/xmfan/178/orig 2025-03-17T17:41:36.2176547Z * [new branch] gh/xmfan/179/base -> origin/gh/xmfan/179/base 2025-03-17T17:41:36.2177387Z * [new branch] gh/xmfan/179/head -> origin/gh/xmfan/179/head 2025-03-17T17:41:36.2178345Z * [new branch] gh/xmfan/179/orig -> origin/gh/xmfan/179/orig 2025-03-17T17:41:36.2179790Z * [new branch] gh/xmfan/18/base -> origin/gh/xmfan/18/base 2025-03-17T17:41:36.2180685Z * [new branch] gh/xmfan/18/head -> origin/gh/xmfan/18/head 2025-03-17T17:41:36.2182055Z * [new branch] gh/xmfan/180/base -> origin/gh/xmfan/180/base 2025-03-17T17:41:36.2182937Z * [new branch] gh/xmfan/180/head -> origin/gh/xmfan/180/head 2025-03-17T17:41:36.2183853Z * [new branch] gh/xmfan/180/orig -> origin/gh/xmfan/180/orig 2025-03-17T17:41:36.2186060Z * [new branch] gh/xmfan/181/base -> origin/gh/xmfan/181/base 2025-03-17T17:41:36.2187024Z * [new branch] gh/xmfan/181/head -> origin/gh/xmfan/181/head 2025-03-17T17:41:36.2188014Z * [new branch] gh/xmfan/181/orig -> origin/gh/xmfan/181/orig 2025-03-17T17:41:36.2189428Z * [new branch] gh/xmfan/182/base -> origin/gh/xmfan/182/base 2025-03-17T17:41:36.2190404Z * [new branch] gh/xmfan/182/head -> origin/gh/xmfan/182/head 2025-03-17T17:41:36.2191380Z * [new branch] gh/xmfan/182/orig -> origin/gh/xmfan/182/orig 2025-03-17T17:41:36.2192685Z * [new branch] gh/xmfan/183/base -> origin/gh/xmfan/183/base 2025-03-17T17:41:36.2193601Z * [new branch] gh/xmfan/183/head -> origin/gh/xmfan/183/head 2025-03-17T17:41:36.2194539Z * [new branch] gh/xmfan/183/orig -> origin/gh/xmfan/183/orig 2025-03-17T17:41:36.2195886Z * [new branch] gh/xmfan/184/base -> origin/gh/xmfan/184/base 2025-03-17T17:41:36.2196766Z * [new branch] gh/xmfan/184/head -> origin/gh/xmfan/184/head 2025-03-17T17:41:36.2197766Z * [new branch] gh/xmfan/184/orig -> 
origin/gh/xmfan/184/orig 2025-03-17T17:41:36.2199495Z * [new branch] gh/xmfan/185/base -> origin/gh/xmfan/185/base 2025-03-17T17:41:36.2200388Z * [new branch] gh/xmfan/185/head -> origin/gh/xmfan/185/head 2025-03-17T17:41:36.2201381Z * [new branch] gh/xmfan/185/orig -> origin/gh/xmfan/185/orig 2025-03-17T17:41:36.2202802Z * [new branch] gh/xmfan/186/base -> origin/gh/xmfan/186/base 2025-03-17T17:41:36.2203766Z * [new branch] gh/xmfan/186/head -> origin/gh/xmfan/186/head 2025-03-17T17:41:36.2204591Z * [new branch] gh/xmfan/186/orig -> origin/gh/xmfan/186/orig 2025-03-17T17:41:36.2205987Z * [new branch] gh/xmfan/187/base -> origin/gh/xmfan/187/base 2025-03-17T17:41:36.2206885Z * [new branch] gh/xmfan/187/head -> origin/gh/xmfan/187/head 2025-03-17T17:41:36.2207876Z * [new branch] gh/xmfan/187/orig -> origin/gh/xmfan/187/orig 2025-03-17T17:41:36.2209604Z * [new branch] gh/xmfan/188/base -> origin/gh/xmfan/188/base 2025-03-17T17:41:36.2210525Z * [new branch] gh/xmfan/188/head -> origin/gh/xmfan/188/head 2025-03-17T17:41:36.2211485Z * [new branch] gh/xmfan/188/orig -> origin/gh/xmfan/188/orig 2025-03-17T17:41:36.2212844Z * [new branch] gh/xmfan/189/base -> origin/gh/xmfan/189/base 2025-03-17T17:41:36.2213779Z * [new branch] gh/xmfan/189/head -> origin/gh/xmfan/189/head 2025-03-17T17:41:36.2214767Z * [new branch] gh/xmfan/189/orig -> origin/gh/xmfan/189/orig 2025-03-17T17:41:36.2216173Z * [new branch] gh/xmfan/190/base -> origin/gh/xmfan/190/base 2025-03-17T17:41:36.2217057Z * [new branch] gh/xmfan/190/head -> origin/gh/xmfan/190/head 2025-03-17T17:41:36.2218037Z * [new branch] gh/xmfan/190/orig -> origin/gh/xmfan/190/orig 2025-03-17T17:41:36.2219378Z * [new branch] gh/xmfan/191/base -> origin/gh/xmfan/191/base 2025-03-17T17:41:36.2220790Z * [new branch] gh/xmfan/191/head -> origin/gh/xmfan/191/head 2025-03-17T17:41:36.2221669Z * [new branch] gh/xmfan/191/orig -> origin/gh/xmfan/191/orig 2025-03-17T17:41:36.2223412Z * [new branch] gh/xmfan/192/base -> origin/gh/xmfan/192/base 2025-03-17T17:41:36.2224401Z * [new branch] gh/xmfan/192/head -> origin/gh/xmfan/192/head 2025-03-17T17:41:36.2225324Z * [new branch] gh/xmfan/192/orig -> origin/gh/xmfan/192/orig 2025-03-17T17:41:36.2226793Z * [new branch] gh/xmfan/193/base -> origin/gh/xmfan/193/base 2025-03-17T17:41:36.2227741Z * [new branch] gh/xmfan/193/head -> origin/gh/xmfan/193/head 2025-03-17T17:41:36.2228746Z * [new branch] gh/xmfan/193/orig -> origin/gh/xmfan/193/orig 2025-03-17T17:41:36.2230086Z * [new branch] gh/xmfan/194/base -> origin/gh/xmfan/194/base 2025-03-17T17:41:36.2230981Z * [new branch] gh/xmfan/194/head -> origin/gh/xmfan/194/head 2025-03-17T17:41:36.2231902Z * [new branch] gh/xmfan/194/orig -> origin/gh/xmfan/194/orig 2025-03-17T17:41:36.2233308Z * [new branch] gh/xmfan/195/base -> origin/gh/xmfan/195/base 2025-03-17T17:41:36.2234202Z * [new branch] gh/xmfan/195/head -> origin/gh/xmfan/195/head 2025-03-17T17:41:36.2235152Z * [new branch] gh/xmfan/195/orig -> origin/gh/xmfan/195/orig 2025-03-17T17:41:36.2236542Z * [new branch] gh/xmfan/196/base -> origin/gh/xmfan/196/base 2025-03-17T17:41:36.2237621Z * [new branch] gh/xmfan/196/head -> origin/gh/xmfan/196/head 2025-03-17T17:41:36.2238697Z * [new branch] gh/xmfan/196/orig -> origin/gh/xmfan/196/orig 2025-03-17T17:41:36.2239971Z * [new branch] gh/xmfan/197/base -> origin/gh/xmfan/197/base 2025-03-17T17:41:36.2240894Z * [new branch] gh/xmfan/197/head -> origin/gh/xmfan/197/head 2025-03-17T17:41:36.2241850Z * [new branch] gh/xmfan/197/orig -> origin/gh/xmfan/197/orig 
2025-03-17T17:41:36.2243129Z * [new branch] gh/xmfan/198/base -> origin/gh/xmfan/198/base 2025-03-17T17:41:36.2244166Z * [new branch] gh/xmfan/198/head -> origin/gh/xmfan/198/head 2025-03-17T17:41:36.2245018Z * [new branch] gh/xmfan/198/orig -> origin/gh/xmfan/198/orig 2025-03-17T17:41:36.2246356Z * [new branch] gh/xmfan/199/base -> origin/gh/xmfan/199/base 2025-03-17T17:41:36.2247276Z * [new branch] gh/xmfan/199/head -> origin/gh/xmfan/199/head 2025-03-17T17:41:36.2248762Z * [new branch] gh/xmfan/199/orig -> origin/gh/xmfan/199/orig 2025-03-17T17:41:36.2250170Z * [new branch] gh/xmfan/200/base -> origin/gh/xmfan/200/base 2025-03-17T17:41:36.2251153Z * [new branch] gh/xmfan/200/head -> origin/gh/xmfan/200/head 2025-03-17T17:41:36.2252099Z * [new branch] gh/xmfan/200/orig -> origin/gh/xmfan/200/orig 2025-03-17T17:41:36.2253370Z * [new branch] gh/xmfan/201/base -> origin/gh/xmfan/201/base 2025-03-17T17:41:36.2254650Z * [new branch] gh/xmfan/201/head -> origin/gh/xmfan/201/head 2025-03-17T17:41:36.2255627Z * [new branch] gh/xmfan/201/orig -> origin/gh/xmfan/201/orig 2025-03-17T17:41:36.2256900Z * [new branch] gh/xmfan/202/base -> origin/gh/xmfan/202/base 2025-03-17T17:41:36.2257861Z * [new branch] gh/xmfan/202/head -> origin/gh/xmfan/202/head 2025-03-17T17:41:36.2258860Z * [new branch] gh/xmfan/202/orig -> origin/gh/xmfan/202/orig 2025-03-17T17:41:36.2260142Z * [new branch] gh/xmfan/203/base -> origin/gh/xmfan/203/base 2025-03-17T17:41:36.2261127Z * [new branch] gh/xmfan/203/head -> origin/gh/xmfan/203/head 2025-03-17T17:41:36.2262087Z * [new branch] gh/xmfan/203/orig -> origin/gh/xmfan/203/orig 2025-03-17T17:41:36.2263749Z * [new branch] gh/xuanzhang816/10/base -> origin/gh/xuanzhang816/10/base 2025-03-17T17:41:36.2264784Z * [new branch] gh/xuanzhang816/10/head -> origin/gh/xuanzhang816/10/head 2025-03-17T17:41:36.2265745Z * [new branch] gh/xuanzhang816/10/orig -> origin/gh/xuanzhang816/10/orig 2025-03-17T17:41:36.2267241Z * [new branch] gh/xuanzhang816/11/base -> origin/gh/xuanzhang816/11/base 2025-03-17T17:41:36.2268174Z * [new branch] gh/xuanzhang816/11/head -> origin/gh/xuanzhang816/11/head 2025-03-17T17:41:36.2269133Z * [new branch] gh/xuanzhang816/11/orig -> origin/gh/xuanzhang816/11/orig 2025-03-17T17:41:36.2270496Z * [new branch] gh/xuanzhang816/13/base -> origin/gh/xuanzhang816/13/base 2025-03-17T17:41:36.2271504Z * [new branch] gh/xuanzhang816/13/head -> origin/gh/xuanzhang816/13/head 2025-03-17T17:41:36.2272668Z * [new branch] gh/xuanzhang816/13/orig -> origin/gh/xuanzhang816/13/orig 2025-03-17T17:41:36.2274256Z * [new branch] gh/xuhancn/1/base -> origin/gh/xuhancn/1/base 2025-03-17T17:41:36.2275212Z * [new branch] gh/xuhancn/1/head -> origin/gh/xuhancn/1/head 2025-03-17T17:41:36.2276378Z * [new branch] gh/xuhancn/2/base -> origin/gh/xuhancn/2/base 2025-03-17T17:41:36.2277305Z * [new branch] gh/xuhancn/2/head -> origin/gh/xuhancn/2/head 2025-03-17T17:41:36.2278472Z * [new branch] gh/xuhancn/3/base -> origin/gh/xuhancn/3/base 2025-03-17T17:41:36.2279404Z * [new branch] gh/xuhancn/3/head -> origin/gh/xuhancn/3/head 2025-03-17T17:41:36.2280574Z * [new branch] gh/xuhancn/4/base -> origin/gh/xuhancn/4/base 2025-03-17T17:41:36.2281522Z * [new branch] gh/xuhancn/4/head -> origin/gh/xuhancn/4/head 2025-03-17T17:41:36.2282653Z * [new branch] gh/xuhancn/5/base -> origin/gh/xuhancn/5/base 2025-03-17T17:41:36.2283658Z * [new branch] gh/xuhancn/5/head -> origin/gh/xuhancn/5/head 2025-03-17T17:41:36.2284724Z * [new branch] gh/xuhancn/6/base -> origin/gh/xuhancn/6/base 
2025-03-17T17:41:36.2285654Z * [new branch] gh/xuhancn/6/head -> origin/gh/xuhancn/6/head 2025-03-17T17:41:36.2286776Z * [new branch] gh/xuhancn/7/base -> origin/gh/xuhancn/7/base 2025-03-17T17:41:36.2287678Z * [new branch] gh/xuhancn/7/head -> origin/gh/xuhancn/7/head 2025-03-17T17:41:36.2289214Z * [new branch] gh/xunnanxu/1/base -> origin/gh/xunnanxu/1/base 2025-03-17T17:41:36.2290103Z * [new branch] gh/xunnanxu/1/head -> origin/gh/xunnanxu/1/head 2025-03-17T17:41:36.2291074Z * [new branch] gh/xunnanxu/1/orig -> origin/gh/xunnanxu/1/orig 2025-03-17T17:41:36.2292308Z * [new branch] gh/xunnanxu/2/base -> origin/gh/xunnanxu/2/base 2025-03-17T17:41:36.2293103Z * [new branch] gh/xunnanxu/2/head -> origin/gh/xunnanxu/2/head 2025-03-17T17:41:36.2294183Z * [new branch] gh/xunnanxu/2/orig -> origin/gh/xunnanxu/2/orig 2025-03-17T17:41:36.2295466Z * [new branch] gh/xunnanxu/3/base -> origin/gh/xunnanxu/3/base 2025-03-17T17:41:36.2296430Z * [new branch] gh/xunnanxu/3/head -> origin/gh/xunnanxu/3/head 2025-03-17T17:41:36.2297490Z * [new branch] gh/xunnanxu/3/orig -> origin/gh/xunnanxu/3/orig 2025-03-17T17:41:36.2298699Z * [new branch] gh/xunnanxu/4/base -> origin/gh/xunnanxu/4/base 2025-03-17T17:41:36.2299672Z * [new branch] gh/xunnanxu/4/head -> origin/gh/xunnanxu/4/head 2025-03-17T17:41:36.2300631Z * [new branch] gh/xunnanxu/4/orig -> origin/gh/xunnanxu/4/orig 2025-03-17T17:41:36.2302288Z * [new branch] gh/yanbing-j/11/base -> origin/gh/yanbing-j/11/base 2025-03-17T17:41:36.2303328Z * [new branch] gh/yanbing-j/11/head -> origin/gh/yanbing-j/11/head 2025-03-17T17:41:36.2304266Z * [new branch] gh/yanbing-j/11/orig -> origin/gh/yanbing-j/11/orig 2025-03-17T17:41:36.2305649Z * [new branch] gh/yanbing-j/12/base -> origin/gh/yanbing-j/12/base 2025-03-17T17:41:36.2306672Z * [new branch] gh/yanbing-j/12/head -> origin/gh/yanbing-j/12/head 2025-03-17T17:41:36.2307757Z * [new branch] gh/yanbing-j/12/orig -> origin/gh/yanbing-j/12/orig 2025-03-17T17:41:36.2309001Z * [new branch] gh/yanbing-j/13/base -> origin/gh/yanbing-j/13/base 2025-03-17T17:41:36.2309970Z * [new branch] gh/yanbing-j/13/head -> origin/gh/yanbing-j/13/head 2025-03-17T17:41:36.2310929Z * [new branch] gh/yanbing-j/13/orig -> origin/gh/yanbing-j/13/orig 2025-03-17T17:41:36.2312244Z * [new branch] gh/yanbing-j/14/base -> origin/gh/yanbing-j/14/base 2025-03-17T17:41:36.2313213Z * [new branch] gh/yanbing-j/14/head -> origin/gh/yanbing-j/14/head 2025-03-17T17:41:36.2314193Z * [new branch] gh/yanbing-j/14/orig -> origin/gh/yanbing-j/14/orig 2025-03-17T17:41:36.2315452Z * [new branch] gh/yanbing-j/15/base -> origin/gh/yanbing-j/15/base 2025-03-17T17:41:36.2316427Z * [new branch] gh/yanbing-j/15/head -> origin/gh/yanbing-j/15/head 2025-03-17T17:41:36.2317364Z * [new branch] gh/yanbing-j/15/orig -> origin/gh/yanbing-j/15/orig 2025-03-17T17:41:36.2318599Z * [new branch] gh/yanbing-j/18/base -> origin/gh/yanbing-j/18/base 2025-03-17T17:41:36.2319561Z * [new branch] gh/yanbing-j/18/head -> origin/gh/yanbing-j/18/head 2025-03-17T17:41:36.2320498Z * [new branch] gh/yanbing-j/18/orig -> origin/gh/yanbing-j/18/orig 2025-03-17T17:41:36.2322042Z * [new branch] gh/yanbing-j/19/base -> origin/gh/yanbing-j/19/base 2025-03-17T17:41:36.2322806Z * [new branch] gh/yanbing-j/19/head -> origin/gh/yanbing-j/19/head 2025-03-17T17:41:36.2323817Z * [new branch] gh/yanbing-j/19/orig -> origin/gh/yanbing-j/19/orig 2025-03-17T17:41:36.2325125Z * [new branch] gh/yanbing-j/20/base -> origin/gh/yanbing-j/20/base 2025-03-17T17:41:36.2326063Z * [new branch] gh/yanbing-j/20/head -> 
origin/gh/yanbing-j/20/head 2025-03-17T17:41:36.2327032Z * [new branch] gh/yanbing-j/20/orig -> origin/gh/yanbing-j/20/orig 2025-03-17T17:41:36.2328283Z * [new branch] gh/yanbing-j/21/base -> origin/gh/yanbing-j/21/base 2025-03-17T17:41:36.2329279Z * [new branch] gh/yanbing-j/21/head -> origin/gh/yanbing-j/21/head 2025-03-17T17:41:36.2330566Z * [new branch] gh/yanbing-j/22/base -> origin/gh/yanbing-j/22/base 2025-03-17T17:41:36.2331560Z * [new branch] gh/yanbing-j/22/head -> origin/gh/yanbing-j/22/head 2025-03-17T17:41:36.2332509Z * [new branch] gh/yanbing-j/22/orig -> origin/gh/yanbing-j/22/orig 2025-03-17T17:41:36.2333776Z * [new branch] gh/yanbing-j/23/base -> origin/gh/yanbing-j/23/base 2025-03-17T17:41:36.2334742Z * [new branch] gh/yanbing-j/23/head -> origin/gh/yanbing-j/23/head 2025-03-17T17:41:36.2335683Z * [new branch] gh/yanbing-j/23/orig -> origin/gh/yanbing-j/23/orig 2025-03-17T17:41:36.2337134Z * [new branch] gh/yanbing-j/24/base -> origin/gh/yanbing-j/24/base 2025-03-17T17:41:36.2338190Z * [new branch] gh/yanbing-j/24/head -> origin/gh/yanbing-j/24/head 2025-03-17T17:41:36.2339153Z * [new branch] gh/yanbing-j/24/orig -> origin/gh/yanbing-j/24/orig 2025-03-17T17:41:36.2340488Z * [new branch] gh/yanbing-j/25/base -> origin/gh/yanbing-j/25/base 2025-03-17T17:41:36.2341555Z * [new branch] gh/yanbing-j/25/head -> origin/gh/yanbing-j/25/head 2025-03-17T17:41:36.2342502Z * [new branch] gh/yanbing-j/25/orig -> origin/gh/yanbing-j/25/orig 2025-03-17T17:41:36.2343887Z * [new branch] gh/yanbing-j/26/base -> origin/gh/yanbing-j/26/base 2025-03-17T17:41:36.2344747Z * [new branch] gh/yanbing-j/26/head -> origin/gh/yanbing-j/26/head 2025-03-17T17:41:36.2345682Z * [new branch] gh/yanbing-j/26/orig -> origin/gh/yanbing-j/26/orig 2025-03-17T17:41:36.2347104Z * [new branch] gh/yanbing-j/28/base -> origin/gh/yanbing-j/28/base 2025-03-17T17:41:36.2347971Z * [new branch] gh/yanbing-j/28/head -> origin/gh/yanbing-j/28/head 2025-03-17T17:41:36.2348950Z * [new branch] gh/yanbing-j/28/orig -> origin/gh/yanbing-j/28/orig 2025-03-17T17:41:36.2350354Z * [new branch] gh/yanbing-j/34/base -> origin/gh/yanbing-j/34/base 2025-03-17T17:41:36.2351326Z * [new branch] gh/yanbing-j/34/head -> origin/gh/yanbing-j/34/head 2025-03-17T17:41:36.2352316Z * [new branch] gh/yanbing-j/34/orig -> origin/gh/yanbing-j/34/orig 2025-03-17T17:41:36.2353681Z * [new branch] gh/yanbing-j/35/base -> origin/gh/yanbing-j/35/base 2025-03-17T17:41:36.2354560Z * [new branch] gh/yanbing-j/35/head -> origin/gh/yanbing-j/35/head 2025-03-17T17:41:36.2355528Z * [new branch] gh/yanbing-j/35/orig -> origin/gh/yanbing-j/35/orig 2025-03-17T17:41:36.2356830Z * [new branch] gh/yanbing-j/36/base -> origin/gh/yanbing-j/36/base 2025-03-17T17:41:36.2357757Z * [new branch] gh/yanbing-j/36/head -> origin/gh/yanbing-j/36/head 2025-03-17T17:41:36.2358726Z * [new branch] gh/yanbing-j/36/orig -> origin/gh/yanbing-j/36/orig 2025-03-17T17:41:36.2360716Z * [new branch] gh/yanbing-j/37/base -> origin/gh/yanbing-j/37/base 2025-03-17T17:41:36.2361522Z * [new branch] gh/yanbing-j/37/head -> origin/gh/yanbing-j/37/head 2025-03-17T17:41:36.2362477Z * [new branch] gh/yanbing-j/37/orig -> origin/gh/yanbing-j/37/orig 2025-03-17T17:41:36.2364224Z * [new branch] gh/yanboliang/62/base -> origin/gh/yanboliang/62/base 2025-03-17T17:41:36.2365080Z * [new branch] gh/yanboliang/62/head -> origin/gh/yanboliang/62/head 2025-03-17T17:41:36.2366059Z * [new branch] gh/yanboliang/62/orig -> origin/gh/yanboliang/62/orig 2025-03-17T17:41:36.2367739Z * [new branch] gh/ydwu4/168/base -> 
origin/gh/ydwu4/168/base 2025-03-17T17:41:36.2368741Z * [new branch] gh/ydwu4/168/head -> origin/gh/ydwu4/168/head 2025-03-17T17:41:36.2369732Z * [new branch] gh/ydwu4/168/orig -> origin/gh/ydwu4/168/orig 2025-03-17T17:41:36.2371119Z * [new branch] gh/ydwu4/179/base -> origin/gh/ydwu4/179/base 2025-03-17T17:41:36.2372007Z * [new branch] gh/ydwu4/179/head -> origin/gh/ydwu4/179/head 2025-03-17T17:41:36.2372959Z * [new branch] gh/ydwu4/179/orig -> origin/gh/ydwu4/179/orig 2025-03-17T17:41:36.2374563Z * [new branch] gh/ydwu4/180/base -> origin/gh/ydwu4/180/base 2025-03-17T17:41:36.2375738Z * [new branch] gh/ydwu4/180/head -> origin/gh/ydwu4/180/head 2025-03-17T17:41:36.2376682Z * [new branch] gh/ydwu4/180/orig -> origin/gh/ydwu4/180/orig 2025-03-17T17:41:36.2378056Z * [new branch] gh/ydwu4/194/base -> origin/gh/ydwu4/194/base 2025-03-17T17:41:36.2378892Z * [new branch] gh/ydwu4/194/head -> origin/gh/ydwu4/194/head 2025-03-17T17:41:36.2379819Z * [new branch] gh/ydwu4/194/orig -> origin/gh/ydwu4/194/orig 2025-03-17T17:41:36.2381553Z * [new branch] gh/ydwu4/201/base -> origin/gh/ydwu4/201/base 2025-03-17T17:41:36.2382488Z * [new branch] gh/ydwu4/201/head -> origin/gh/ydwu4/201/head 2025-03-17T17:41:36.2383476Z * [new branch] gh/ydwu4/201/orig -> origin/gh/ydwu4/201/orig 2025-03-17T17:41:36.2385218Z * [new branch] gh/ydwu4/208/base -> origin/gh/ydwu4/208/base 2025-03-17T17:41:36.2386624Z * [new branch] gh/ydwu4/208/head -> origin/gh/ydwu4/208/head 2025-03-17T17:41:36.2387646Z * [new branch] gh/ydwu4/208/orig -> origin/gh/ydwu4/208/orig 2025-03-17T17:41:36.2388973Z * [new branch] gh/ydwu4/209/base -> origin/gh/ydwu4/209/base 2025-03-17T17:41:36.2389870Z * [new branch] gh/ydwu4/209/head -> origin/gh/ydwu4/209/head 2025-03-17T17:41:36.2390831Z * [new branch] gh/ydwu4/209/orig -> origin/gh/ydwu4/209/orig 2025-03-17T17:41:36.2392213Z * [new branch] gh/ydwu4/210/base -> origin/gh/ydwu4/210/base 2025-03-17T17:41:36.2393110Z * [new branch] gh/ydwu4/210/head -> origin/gh/ydwu4/210/head 2025-03-17T17:41:36.2394064Z * [new branch] gh/ydwu4/210/orig -> origin/gh/ydwu4/210/orig 2025-03-17T17:41:36.2395482Z * [new branch] gh/ydwu4/211/base -> origin/gh/ydwu4/211/base 2025-03-17T17:41:36.2396292Z * [new branch] gh/ydwu4/211/head -> origin/gh/ydwu4/211/head 2025-03-17T17:41:36.2397307Z * [new branch] gh/ydwu4/211/orig -> origin/gh/ydwu4/211/orig 2025-03-17T17:41:36.2398666Z * [new branch] gh/ydwu4/212/base -> origin/gh/ydwu4/212/base 2025-03-17T17:41:36.2399608Z * [new branch] gh/ydwu4/212/head -> origin/gh/ydwu4/212/head 2025-03-17T17:41:36.2400566Z * [new branch] gh/ydwu4/212/orig -> origin/gh/ydwu4/212/orig 2025-03-17T17:41:36.2402079Z * [new branch] gh/ydwu4/213/base -> origin/gh/ydwu4/213/base 2025-03-17T17:41:36.2402935Z * [new branch] gh/ydwu4/213/head -> origin/gh/ydwu4/213/head 2025-03-17T17:41:36.2403896Z * [new branch] gh/ydwu4/213/orig -> origin/gh/ydwu4/213/orig 2025-03-17T17:41:36.2405343Z * [new branch] gh/ydwu4/214/base -> origin/gh/ydwu4/214/base 2025-03-17T17:41:36.2406191Z * [new branch] gh/ydwu4/214/head -> origin/gh/ydwu4/214/head 2025-03-17T17:41:36.2407189Z * [new branch] gh/ydwu4/214/orig -> origin/gh/ydwu4/214/orig 2025-03-17T17:41:36.2408641Z * [new branch] gh/ydwu4/215/base -> origin/gh/ydwu4/215/base 2025-03-17T17:41:36.2409580Z * [new branch] gh/ydwu4/215/head -> origin/gh/ydwu4/215/head 2025-03-17T17:41:36.2410574Z * [new branch] gh/ydwu4/215/orig -> origin/gh/ydwu4/215/orig 2025-03-17T17:41:36.2412228Z * [new branch] gh/ydwu4/216/base -> origin/gh/ydwu4/216/base 
2025-03-17T17:41:36.2413487Z * [new branch] gh/ydwu4/216/head -> origin/gh/ydwu4/216/head 2025-03-17T17:41:36.2414479Z * [new branch] gh/ydwu4/216/orig -> origin/gh/ydwu4/216/orig 2025-03-17T17:41:36.2415917Z * [new branch] gh/ydwu4/217/base -> origin/gh/ydwu4/217/base 2025-03-17T17:41:36.2416823Z * [new branch] gh/ydwu4/217/head -> origin/gh/ydwu4/217/head 2025-03-17T17:41:36.2417821Z * [new branch] gh/ydwu4/217/orig -> origin/gh/ydwu4/217/orig 2025-03-17T17:41:36.2419254Z * [new branch] gh/ydwu4/218/base -> origin/gh/ydwu4/218/base 2025-03-17T17:41:36.2420216Z * [new branch] gh/ydwu4/218/head -> origin/gh/ydwu4/218/head 2025-03-17T17:41:36.2421202Z * [new branch] gh/ydwu4/218/orig -> origin/gh/ydwu4/218/orig 2025-03-17T17:41:36.2423055Z * [new branch] gh/ydwu4/219/base -> origin/gh/ydwu4/219/base 2025-03-17T17:41:36.2424020Z * [new branch] gh/ydwu4/219/head -> origin/gh/ydwu4/219/head 2025-03-17T17:41:36.2425060Z * [new branch] gh/ydwu4/219/orig -> origin/gh/ydwu4/219/orig 2025-03-17T17:41:36.2427105Z * [new branch] gh/ydwu4/220/base -> origin/gh/ydwu4/220/base 2025-03-17T17:41:36.2428109Z * [new branch] gh/ydwu4/220/head -> origin/gh/ydwu4/220/head 2025-03-17T17:41:36.2429126Z * [new branch] gh/ydwu4/220/orig -> origin/gh/ydwu4/220/orig 2025-03-17T17:41:36.2430559Z * [new branch] gh/ydwu4/221/base -> origin/gh/ydwu4/221/base 2025-03-17T17:41:36.2431495Z * [new branch] gh/ydwu4/221/head -> origin/gh/ydwu4/221/head 2025-03-17T17:41:36.2432898Z * [new branch] gh/ydwu4/221/orig -> origin/gh/ydwu4/221/orig 2025-03-17T17:41:36.2434187Z * [new branch] gh/ydwu4/222/base -> origin/gh/ydwu4/222/base 2025-03-17T17:41:36.2435520Z * [new branch] gh/ydwu4/222/head -> origin/gh/ydwu4/222/head 2025-03-17T17:41:36.2437021Z * [new branch] gh/ydwu4/222/orig -> origin/gh/ydwu4/222/orig 2025-03-17T17:41:36.2438611Z * [new branch] gh/ydwu4/7/base -> origin/gh/ydwu4/7/base 2025-03-17T17:41:36.2439586Z * [new branch] gh/ydwu4/7/head -> origin/gh/ydwu4/7/head 2025-03-17T17:41:36.2440574Z * [new branch] gh/ydwu4/7/orig -> origin/gh/ydwu4/7/orig 2025-03-17T17:41:36.2442245Z * [new branch] gh/yf225/133/base -> origin/gh/yf225/133/base 2025-03-17T17:41:36.2443171Z * [new branch] gh/yf225/133/head -> origin/gh/yf225/133/head 2025-03-17T17:41:36.2444783Z * [new branch] gh/yf225/158/base -> origin/gh/yf225/158/base 2025-03-17T17:41:36.2445848Z * [new branch] gh/yf225/158/head -> origin/gh/yf225/158/head 2025-03-17T17:41:36.2446649Z * [new branch] gh/yf225/158/orig -> origin/gh/yf225/158/orig 2025-03-17T17:41:36.2448111Z * [new branch] gh/yf225/159/base -> origin/gh/yf225/159/base 2025-03-17T17:41:36.2449012Z * [new branch] gh/yf225/159/head -> origin/gh/yf225/159/head 2025-03-17T17:41:36.2449996Z * [new branch] gh/yf225/159/orig -> origin/gh/yf225/159/orig 2025-03-17T17:41:36.2452101Z * [new branch] gh/yf225/160/base -> origin/gh/yf225/160/base 2025-03-17T17:41:36.2452999Z * [new branch] gh/yf225/160/head -> origin/gh/yf225/160/head 2025-03-17T17:41:36.2454046Z * [new branch] gh/yf225/160/orig -> origin/gh/yf225/160/orig 2025-03-17T17:41:36.2455377Z * [new branch] gh/yf225/162/base -> origin/gh/yf225/162/base 2025-03-17T17:41:36.2456698Z * [new branch] gh/yf225/162/head -> origin/gh/yf225/162/head 2025-03-17T17:41:36.2457613Z * [new branch] gh/yf225/162/orig -> origin/gh/yf225/162/orig 2025-03-17T17:41:36.2458991Z * [new branch] gh/yf225/163/base -> origin/gh/yf225/163/base 2025-03-17T17:41:36.2459874Z * [new branch] gh/yf225/163/head -> origin/gh/yf225/163/head 2025-03-17T17:41:36.2460851Z * [new branch] 
gh/yf225/163/orig -> origin/gh/yf225/163/orig 2025-03-17T17:41:36.2462234Z * [new branch] gh/yf225/164/base -> origin/gh/yf225/164/base 2025-03-17T17:41:36.2463207Z * [new branch] gh/yf225/164/head -> origin/gh/yf225/164/head 2025-03-17T17:41:36.2464205Z * [new branch] gh/yf225/164/orig -> origin/gh/yf225/164/orig 2025-03-17T17:41:36.2465525Z * [new branch] gh/yf225/85/base -> origin/gh/yf225/85/base 2025-03-17T17:41:36.2467195Z * [new branch] gh/yf225/85/head -> origin/gh/yf225/85/head 2025-03-17T17:41:36.2468072Z * [new branch] gh/yf225/85/orig -> origin/gh/yf225/85/orig 2025-03-17T17:41:36.2469852Z * [new branch] gh/yf225/93/base -> origin/gh/yf225/93/base 2025-03-17T17:41:36.2470744Z * [new branch] gh/yf225/93/head -> origin/gh/yf225/93/head 2025-03-17T17:41:36.2472935Z * [new branch] gh/yifuwang/152/base -> origin/gh/yifuwang/152/base 2025-03-17T17:41:36.2474026Z * [new branch] gh/yifuwang/152/head -> origin/gh/yifuwang/152/head 2025-03-17T17:41:36.2475129Z * [new branch] gh/yifuwang/152/orig -> origin/gh/yifuwang/152/orig 2025-03-17T17:41:36.2476496Z * [new branch] gh/yifuwang/174/base -> origin/gh/yifuwang/174/base 2025-03-17T17:41:36.2477480Z * [new branch] gh/yifuwang/174/head -> origin/gh/yifuwang/174/head 2025-03-17T17:41:36.2478489Z * [new branch] gh/yifuwang/174/orig -> origin/gh/yifuwang/174/orig 2025-03-17T17:41:36.2479854Z * [new branch] gh/yifuwang/185/base -> origin/gh/yifuwang/185/base 2025-03-17T17:41:36.2480769Z * [new branch] gh/yifuwang/185/head -> origin/gh/yifuwang/185/head 2025-03-17T17:41:36.2481701Z * [new branch] gh/yifuwang/185/orig -> origin/gh/yifuwang/185/orig 2025-03-17T17:41:36.2482982Z * [new branch] gh/yifuwang/186/base -> origin/gh/yifuwang/186/base 2025-03-17T17:41:36.2483844Z * [new branch] gh/yifuwang/186/head -> origin/gh/yifuwang/186/head 2025-03-17T17:41:36.2484829Z * [new branch] gh/yifuwang/186/orig -> origin/gh/yifuwang/186/orig 2025-03-17T17:41:36.2486243Z * [new branch] gh/yifuwang/187/base -> origin/gh/yifuwang/187/base 2025-03-17T17:41:36.2487161Z * [new branch] gh/yifuwang/187/head -> origin/gh/yifuwang/187/head 2025-03-17T17:41:36.2488235Z * [new branch] gh/yifuwang/187/orig -> origin/gh/yifuwang/187/orig 2025-03-17T17:41:36.2489494Z * [new branch] gh/yifuwang/188/base -> origin/gh/yifuwang/188/base 2025-03-17T17:41:36.2490398Z * [new branch] gh/yifuwang/188/head -> origin/gh/yifuwang/188/head 2025-03-17T17:41:36.2491338Z * [new branch] gh/yifuwang/188/orig -> origin/gh/yifuwang/188/orig 2025-03-17T17:41:36.2492597Z * [new branch] gh/yifuwang/189/base -> origin/gh/yifuwang/189/base 2025-03-17T17:41:36.2493475Z * [new branch] gh/yifuwang/189/head -> origin/gh/yifuwang/189/head 2025-03-17T17:41:36.2494474Z * [new branch] gh/yifuwang/189/orig -> origin/gh/yifuwang/189/orig 2025-03-17T17:41:36.2495624Z * [new branch] gh/yifuwang/190/base -> origin/gh/yifuwang/190/base 2025-03-17T17:41:36.2496572Z * [new branch] gh/yifuwang/190/head -> origin/gh/yifuwang/190/head 2025-03-17T17:41:36.2497556Z * [new branch] gh/yifuwang/190/orig -> origin/gh/yifuwang/190/orig 2025-03-17T17:41:36.2498779Z * [new branch] gh/yifuwang/191/base -> origin/gh/yifuwang/191/base 2025-03-17T17:41:36.2499700Z * [new branch] gh/yifuwang/191/head -> origin/gh/yifuwang/191/head 2025-03-17T17:41:36.2500653Z * [new branch] gh/yifuwang/191/orig -> origin/gh/yifuwang/191/orig 2025-03-17T17:41:36.2501877Z * [new branch] gh/yifuwang/192/base -> origin/gh/yifuwang/192/base 2025-03-17T17:41:36.2502730Z * [new branch] gh/yifuwang/192/head -> origin/gh/yifuwang/192/head 
2025-03-17T17:41:36.2504169Z * [new branch] gh/yifuwang/192/orig -> origin/gh/yifuwang/192/orig 2025-03-17T17:41:36.2505471Z * [new branch] gh/yifuwang/194/base -> origin/gh/yifuwang/194/base 2025-03-17T17:41:36.2506476Z * [new branch] gh/yifuwang/194/head -> origin/gh/yifuwang/194/head 2025-03-17T17:41:36.2509507Z * [new branch] gh/yifuwang/194/orig -> origin/gh/yifuwang/194/orig 2025-03-17T17:41:36.2509944Z * [new branch] gh/yifuwang/195/base -> origin/gh/yifuwang/195/base 2025-03-17T17:41:36.2510891Z * [new branch] gh/yifuwang/195/head -> origin/gh/yifuwang/195/head 2025-03-17T17:41:36.2511491Z * [new branch] gh/yifuwang/195/orig -> origin/gh/yifuwang/195/orig 2025-03-17T17:41:36.2512733Z * [new branch] gh/yifuwang/196/base -> origin/gh/yifuwang/196/base 2025-03-17T17:41:36.2513701Z * [new branch] gh/yifuwang/196/head -> origin/gh/yifuwang/196/head 2025-03-17T17:41:36.2514639Z * [new branch] gh/yifuwang/196/orig -> origin/gh/yifuwang/196/orig 2025-03-17T17:41:36.2516332Z * [new branch] gh/yiming0416/1/base -> origin/gh/yiming0416/1/base 2025-03-17T17:41:36.2517238Z * [new branch] gh/yiming0416/1/head -> origin/gh/yiming0416/1/head 2025-03-17T17:41:36.2518455Z * [new branch] gh/yiming0416/2/base -> origin/gh/yiming0416/2/base 2025-03-17T17:41:36.2519252Z * [new branch] gh/yiming0416/2/head -> origin/gh/yiming0416/2/head 2025-03-17T17:41:36.2520887Z * [new branch] gh/ysiraichi/78/base -> origin/gh/ysiraichi/78/base 2025-03-17T17:41:36.2521799Z * [new branch] gh/ysiraichi/78/head -> origin/gh/ysiraichi/78/head 2025-03-17T17:41:36.2522907Z * [new branch] gh/ysiraichi/78/orig -> origin/gh/ysiraichi/78/orig 2025-03-17T17:41:36.2524269Z * [new branch] gh/ysiraichi/79/base -> origin/gh/ysiraichi/79/base 2025-03-17T17:41:36.2525152Z * [new branch] gh/ysiraichi/79/head -> origin/gh/ysiraichi/79/head 2025-03-17T17:41:36.2526396Z * [new branch] gh/ysiraichi/79/orig -> origin/gh/ysiraichi/79/orig 2025-03-17T17:41:36.2527775Z * [new branch] gh/ysiraichi/80/base -> origin/gh/ysiraichi/80/base 2025-03-17T17:41:36.2528582Z * [new branch] gh/ysiraichi/80/head -> origin/gh/ysiraichi/80/head 2025-03-17T17:41:36.2529612Z * [new branch] gh/ysiraichi/80/orig -> origin/gh/ysiraichi/80/orig 2025-03-17T17:41:36.2531022Z * [new branch] gh/ysiraichi/81/base -> origin/gh/ysiraichi/81/base 2025-03-17T17:41:36.2531913Z * [new branch] gh/ysiraichi/81/head -> origin/gh/ysiraichi/81/head 2025-03-17T17:41:36.2532954Z * [new branch] gh/ysiraichi/81/orig -> origin/gh/ysiraichi/81/orig 2025-03-17T17:41:36.2534313Z * [new branch] gh/ysiraichi/82/base -> origin/gh/ysiraichi/82/base 2025-03-17T17:41:36.2535157Z * [new branch] gh/ysiraichi/82/head -> origin/gh/ysiraichi/82/head 2025-03-17T17:41:36.2536205Z * [new branch] gh/ysiraichi/82/orig -> origin/gh/ysiraichi/82/orig 2025-03-17T17:41:36.2540035Z * [new branch] gh/ysiraichi/83/base -> origin/gh/ysiraichi/83/base 2025-03-17T17:41:36.2541103Z * [new branch] gh/ysiraichi/83/head -> origin/gh/ysiraichi/83/head 2025-03-17T17:41:36.2542801Z * [new branch] gh/ysiraichi/83/orig -> origin/gh/ysiraichi/83/orig 2025-03-17T17:41:36.2544450Z * [new branch] gh/zhuhaozhe/28/base -> origin/gh/zhuhaozhe/28/base 2025-03-17T17:41:36.2545340Z * [new branch] gh/zhuhaozhe/28/head -> origin/gh/zhuhaozhe/28/head 2025-03-17T17:41:36.2546429Z * [new branch] gh/zhuhaozhe/28/orig -> origin/gh/zhuhaozhe/28/orig 2025-03-17T17:41:36.2547814Z * [new branch] gh/zhuhaozhe/29/base -> origin/gh/zhuhaozhe/29/base 2025-03-17T17:41:36.2548706Z * [new branch] gh/zhuhaozhe/29/head -> origin/gh/zhuhaozhe/29/head 
2025-03-17T17:41:36.2549692Z * [new branch] gh/zhuhaozhe/29/orig -> origin/gh/zhuhaozhe/29/orig 2025-03-17T17:41:36.2551065Z * [new branch] gh/zhuhaozhe/31/base -> origin/gh/zhuhaozhe/31/base 2025-03-17T17:41:36.2551960Z * [new branch] gh/zhuhaozhe/31/head -> origin/gh/zhuhaozhe/31/head 2025-03-17T17:41:36.2552889Z * [new branch] gh/zhuhaozhe/31/orig -> origin/gh/zhuhaozhe/31/orig 2025-03-17T17:41:36.2554222Z * [new branch] gh/zhuhaozhe/32/base -> origin/gh/zhuhaozhe/32/base 2025-03-17T17:41:36.2555115Z * [new branch] gh/zhuhaozhe/32/head -> origin/gh/zhuhaozhe/32/head 2025-03-17T17:41:36.2556094Z * [new branch] gh/zhuhaozhe/32/orig -> origin/gh/zhuhaozhe/32/orig 2025-03-17T17:41:36.2557353Z * [new branch] gh/zhuhaozhe/33/base -> origin/gh/zhuhaozhe/33/base 2025-03-17T17:41:36.2558266Z * [new branch] gh/zhuhaozhe/33/head -> origin/gh/zhuhaozhe/33/head 2025-03-17T17:41:36.2559255Z * [new branch] gh/zhuhaozhe/33/orig -> origin/gh/zhuhaozhe/33/orig 2025-03-17T17:41:36.2561089Z * [new branch] gh/zou3519/1106/base -> origin/gh/zou3519/1106/base 2025-03-17T17:41:36.2562088Z * [new branch] gh/zou3519/1106/head -> origin/gh/zou3519/1106/head 2025-03-17T17:41:36.2563110Z * [new branch] gh/zou3519/1106/orig -> origin/gh/zou3519/1106/orig 2025-03-17T17:41:36.2564859Z * [new branch] gh/zou3519/1107/base -> origin/gh/zou3519/1107/base 2025-03-17T17:41:36.2565879Z * [new branch] gh/zou3519/1107/head -> origin/gh/zou3519/1107/head 2025-03-17T17:41:36.2566926Z * [new branch] gh/zou3519/1107/orig -> origin/gh/zou3519/1107/orig 2025-03-17T17:41:36.2568460Z * [new branch] gh/zou3519/1108/base -> origin/gh/zou3519/1108/base 2025-03-17T17:41:36.2569404Z * [new branch] gh/zou3519/1108/head -> origin/gh/zou3519/1108/head 2025-03-17T17:41:36.2570609Z * [new branch] gh/zou3519/1108/orig -> origin/gh/zou3519/1108/orig 2025-03-17T17:41:36.2572138Z * [new branch] gh/zou3519/1109/base -> origin/gh/zou3519/1109/base 2025-03-17T17:41:36.2573081Z * [new branch] gh/zou3519/1109/head -> origin/gh/zou3519/1109/head 2025-03-17T17:41:36.2574282Z * [new branch] gh/zou3519/1109/orig -> origin/gh/zou3519/1109/orig 2025-03-17T17:41:36.2575813Z * [new branch] gh/zou3519/1110/base -> origin/gh/zou3519/1110/base 2025-03-17T17:41:36.2577221Z * [new branch] gh/zou3519/1110/head -> origin/gh/zou3519/1110/head 2025-03-17T17:41:36.2578246Z * [new branch] gh/zou3519/1110/orig -> origin/gh/zou3519/1110/orig 2025-03-17T17:41:36.2579801Z * [new branch] gh/zou3519/1111/base -> origin/gh/zou3519/1111/base 2025-03-17T17:41:36.2580712Z * [new branch] gh/zou3519/1111/head -> origin/gh/zou3519/1111/head 2025-03-17T17:41:36.2581665Z * [new branch] gh/zou3519/1111/orig -> origin/gh/zou3519/1111/orig 2025-03-17T17:41:36.2583146Z * [new branch] gh/zou3519/1112/base -> origin/gh/zou3519/1112/base 2025-03-17T17:41:36.2584143Z * [new branch] gh/zou3519/1112/head -> origin/gh/zou3519/1112/head 2025-03-17T17:41:36.2585181Z * [new branch] gh/zou3519/1112/orig -> origin/gh/zou3519/1112/orig 2025-03-17T17:41:36.2586654Z * [new branch] gh/zou3519/1129/base -> origin/gh/zou3519/1129/base 2025-03-17T17:41:36.2587664Z * [new branch] gh/zou3519/1129/head -> origin/gh/zou3519/1129/head 2025-03-17T17:41:36.2588661Z * [new branch] gh/zou3519/1129/orig -> origin/gh/zou3519/1129/orig 2025-03-17T17:41:36.2589986Z * [new branch] gh/zou3519/1130/base -> origin/gh/zou3519/1130/base 2025-03-17T17:41:36.2590921Z * [new branch] gh/zou3519/1130/head -> origin/gh/zou3519/1130/head 2025-03-17T17:41:36.2592024Z * [new branch] gh/zou3519/1130/orig -> origin/gh/zou3519/1130/orig 
2025-03-17T17:41:36.2593291Z * [new branch] gh/zou3519/1134/base -> origin/gh/zou3519/1134/base 2025-03-17T17:41:36.2594156Z * [new branch] gh/zou3519/1134/head -> origin/gh/zou3519/1134/head 2025-03-17T17:41:36.2595652Z * [new branch] gh/zou3519/1135/base -> origin/gh/zou3519/1135/base 2025-03-17T17:41:36.2596540Z * [new branch] gh/zou3519/1135/head -> origin/gh/zou3519/1135/head 2025-03-17T17:41:36.2597553Z * [new branch] gh/zou3519/1135/orig -> origin/gh/zou3519/1135/orig 2025-03-17T17:41:36.2598910Z * [new branch] gh/zou3519/1136/base -> origin/gh/zou3519/1136/base 2025-03-17T17:41:36.2599724Z * [new branch] gh/zou3519/1136/head -> origin/gh/zou3519/1136/head 2025-03-17T17:41:36.2600733Z * [new branch] gh/zou3519/1136/orig -> origin/gh/zou3519/1136/orig 2025-03-17T17:41:36.2602082Z * [new branch] gh/zou3519/1137/base -> origin/gh/zou3519/1137/base 2025-03-17T17:41:36.2603030Z * [new branch] gh/zou3519/1137/head -> origin/gh/zou3519/1137/head 2025-03-17T17:41:36.2603967Z * [new branch] gh/zou3519/1137/orig -> origin/gh/zou3519/1137/orig 2025-03-17T17:41:36.2605385Z * [new branch] gh/zou3519/1138/base -> origin/gh/zou3519/1138/base 2025-03-17T17:41:36.2606275Z * [new branch] gh/zou3519/1138/head -> origin/gh/zou3519/1138/head 2025-03-17T17:41:36.2607271Z * [new branch] gh/zou3519/1138/orig -> origin/gh/zou3519/1138/orig 2025-03-17T17:41:36.2608833Z * [new branch] gh/zou3519/1139/base -> origin/gh/zou3519/1139/base 2025-03-17T17:41:36.2609698Z * [new branch] gh/zou3519/1139/head -> origin/gh/zou3519/1139/head 2025-03-17T17:41:36.2610697Z * [new branch] gh/zou3519/1139/orig -> origin/gh/zou3519/1139/orig 2025-03-17T17:41:36.2612418Z * [new branch] gh/zou3519/1140/base -> origin/gh/zou3519/1140/base 2025-03-17T17:41:36.2613901Z * [new branch] gh/zou3519/1140/head -> origin/gh/zou3519/1140/head 2025-03-17T17:41:36.2614911Z * [new branch] gh/zou3519/1140/orig -> origin/gh/zou3519/1140/orig 2025-03-17T17:41:36.2616471Z * [new branch] gh/zou3519/1141/base -> origin/gh/zou3519/1141/base 2025-03-17T17:41:36.2617302Z * [new branch] gh/zou3519/1141/head -> origin/gh/zou3519/1141/head 2025-03-17T17:41:36.2618308Z * [new branch] gh/zou3519/1141/orig -> origin/gh/zou3519/1141/orig 2025-03-17T17:41:36.2619764Z * [new branch] gh/zou3519/1142/base -> origin/gh/zou3519/1142/base 2025-03-17T17:41:36.2620731Z * [new branch] gh/zou3519/1142/head -> origin/gh/zou3519/1142/head 2025-03-17T17:41:36.2621721Z * [new branch] gh/zou3519/1142/orig -> origin/gh/zou3519/1142/orig 2025-03-17T17:41:36.2623177Z * [new branch] gh/zou3519/1143/base -> origin/gh/zou3519/1143/base 2025-03-17T17:41:36.2624166Z * [new branch] gh/zou3519/1143/head -> origin/gh/zou3519/1143/head 2025-03-17T17:41:36.2625643Z * [new branch] gh/zou3519/1143/orig -> origin/gh/zou3519/1143/orig 2025-03-17T17:41:36.2627162Z * [new branch] gh/zou3519/1144/base -> origin/gh/zou3519/1144/base 2025-03-17T17:41:36.2628120Z * [new branch] gh/zou3519/1144/head -> origin/gh/zou3519/1144/head 2025-03-17T17:41:36.2629158Z * [new branch] gh/zou3519/1144/orig -> origin/gh/zou3519/1144/orig 2025-03-17T17:41:36.2630596Z * [new branch] gh/zou3519/1145/base -> origin/gh/zou3519/1145/base 2025-03-17T17:41:36.2635286Z * [new branch] gh/zou3519/1145/head -> origin/gh/zou3519/1145/head 2025-03-17T17:41:36.2636215Z * [new branch] gh/zou3519/1145/orig -> origin/gh/zou3519/1145/orig 2025-03-17T17:41:36.2637739Z * [new branch] gh/zou3519/1146/base -> origin/gh/zou3519/1146/base 2025-03-17T17:41:36.2638744Z * [new branch] gh/zou3519/1146/head -> origin/gh/zou3519/1146/head 
2025-03-17T17:41:36.2639775Z * [new branch] gh/zou3519/1146/orig -> origin/gh/zou3519/1146/orig 2025-03-17T17:41:36.2641067Z * [new branch] gh/zou3519/1147/base -> origin/gh/zou3519/1147/base 2025-03-17T17:41:36.2642010Z * [new branch] gh/zou3519/1147/head -> origin/gh/zou3519/1147/head 2025-03-17T17:41:36.2643001Z * [new branch] gh/zou3519/1147/orig -> origin/gh/zou3519/1147/orig 2025-03-17T17:41:36.2644243Z * [new branch] gh/zou3519/1148/base -> origin/gh/zou3519/1148/base 2025-03-17T17:41:36.2645152Z * [new branch] gh/zou3519/1148/head -> origin/gh/zou3519/1148/head 2025-03-17T17:41:36.2646651Z * [new branch] gh/zou3519/1149/base -> origin/gh/zou3519/1149/base 2025-03-17T17:41:36.2647643Z * [new branch] gh/zou3519/1149/head -> origin/gh/zou3519/1149/head 2025-03-17T17:41:36.2648638Z * [new branch] gh/zou3519/1149/orig -> origin/gh/zou3519/1149/orig 2025-03-17T17:41:36.2650157Z * [new branch] gh/zou3519/754/base -> origin/gh/zou3519/754/base 2025-03-17T17:41:36.2651064Z * [new branch] gh/zou3519/754/head -> origin/gh/zou3519/754/head 2025-03-17T17:41:36.2652082Z * [new branch] gh/zou3519/754/orig -> origin/gh/zou3519/754/orig 2025-03-17T17:41:36.2653491Z * [new branch] gh/zou3519/916/base -> origin/gh/zou3519/916/base 2025-03-17T17:41:36.2654438Z * [new branch] gh/zou3519/916/head -> origin/gh/zou3519/916/head 2025-03-17T17:41:36.2655805Z * [new branch] google-main -> origin/google-main 2025-03-17T17:41:36.2657061Z * [new branch] guangyey/external_stream -> origin/guangyey/external_stream 2025-03-17T17:41:36.2657876Z * [new branch] guangyey/host_alloc -> origin/guangyey/host_alloc 2025-03-17T17:41:36.2658744Z * [new branch] guangyey/test_2025 -> origin/guangyey/test_2025 2025-03-17T17:41:36.2659667Z * [new branch] guard_system -> origin/guard_system 2025-03-17T17:41:36.2661424Z * [new branch] guilhermeleobas/cherry-pick-55d87d9dfd9 -> origin/guilhermeleobas/cherry-pick-55d87d9dfd9 2025-03-17T17:41:36.2662376Z * [new branch] haozhe/bf16-dynamic-shape -> origin/haozhe/bf16-dynamic-shape 2025-03-17T17:41:36.2663200Z * [new branch] hhh_rand -> origin/hhh_rand 2025-03-17T17:41:36.2664214Z * [new branch] hoy-update-wheel -> origin/hoy-update-wheel 2025-03-17T17:41:36.2665810Z * [new branch] hoy/autofdo/xblock -> origin/hoy/autofdo/xblock 2025-03-17T17:41:36.2667109Z * [new branch] hoy/autotune/nreg -> origin/hoy/autotune/nreg 2025-03-17T17:41:36.2668183Z * [new branch] hoy/autotune/numwarps -> origin/hoy/autotune/numwarps 2025-03-17T17:41:36.2668993Z * [new branch] hoy/mmsplitk -> origin/hoy/mmsplitk 2025-03-17T17:41:36.2669970Z * [new branch] hoy/triton-PR3973 -> origin/hoy/triton-PR3973 2025-03-17T17:41:36.2671130Z * [new branch] hoy/triton-coalescing-baseline -> origin/hoy/triton-coalescing-baseline 2025-03-17T17:41:36.2671965Z * [new branch] hoy/triton-coalescing-min -> origin/hoy/triton-coalescing-min 2025-03-17T17:41:36.2673308Z * [new branch] hoy/triton-coalescing-new -> origin/hoy/triton-coalescing-new 2025-03-17T17:41:36.2674535Z * [new branch] hoy/triton-coalescing-vec -> origin/hoy/triton-coalescing-vec 2025-03-17T17:41:36.2675459Z * [new branch] improve_vec_log -> origin/improve_vec_log 2025-03-17T17:41:36.2676735Z * [new branch] inductor_layout_opt_rocm_disable -> origin/inductor_layout_opt_rocm_disable 2025-03-17T17:41:36.2677511Z * [new branch] inline -> origin/inline 2025-03-17T17:41:36.2678461Z * [new branch] inlining -> origin/inlining 2025-03-17T17:41:36.2679524Z * [new branch] inlining-ezyang -> origin/inlining-ezyang 2025-03-17T17:41:36.2680436Z * [new branch] int8_sdpa -> 
origin/int8_sdpa 2025-03-17T17:41:36.2681470Z * [new branch] int8_sdpa_template -> origin/int8_sdpa_template 2025-03-17T17:41:36.2682493Z * [new branch] invoke-subgraph -> origin/invoke-subgraph 2025-03-17T17:41:36.2683404Z * [new branch] ios-mac-m1 -> origin/ios-mac-m1 2025-03-17T17:41:36.2684754Z * [new branch] ipiszy/fix -> origin/ipiszy/fix 2025-03-17T17:41:36.2685599Z * [new branch] ipiszy/fp8_test -> origin/ipiszy/fp8_test 2025-03-17T17:41:36.2686497Z * [new branch] ipiszy/mypy -> origin/ipiszy/mypy 2025-03-17T17:41:36.2687513Z * [new branch] issue#58739 -> origin/issue#58739 2025-03-17T17:41:36.2689173Z * [new branch] ivanov/cherry-pick-ckpt-fixes -> origin/ivanov/cherry-pick-ckpt-fixes 2025-03-17T17:41:36.2690053Z * [new branch] jataylo-nvfuser_blocklist -> origin/jataylo-nvfuser_blocklist 2025-03-17T17:41:36.2691715Z * [new branch] jcaip/test-cusparselt-version-0.6.2 -> origin/jcaip/test-cusparselt-version-0.6.2 2025-03-17T17:41:36.2692428Z * [new branch] jcaip/torch-compile-sparse -> origin/jcaip/torch-compile-sparse 2025-03-17T17:41:36.2693625Z * [new branch] jcaip/update-benchmarks -> origin/jcaip/update-benchmarks 2025-03-17T17:41:36.2694540Z * [new branch] jcaip/update-cusparselt-0.6.2 -> origin/jcaip/update-cusparselt-0.6.2 2025-03-17T17:41:36.2695836Z * [new branch] jeanschmidt/manywheel_memory -> origin/jeanschmidt/manywheel_memory 2025-03-17T17:41:36.2696797Z * [new branch] jeanschmidt/pull_ephemeral_runners -> origin/jeanschmidt/pull_ephemeral_runners 2025-03-17T17:41:36.2697640Z * [new branch] jeanschmidt/test_infra_250314 -> origin/jeanschmidt/test_infra_250314 2025-03-17T17:41:36.2699159Z * [new branch] jnair/mi300_docker_caching_workflow -> origin/jnair/mi300_docker_caching_workflow 2025-03-17T17:41:36.2700292Z * [new branch] jon-chuang/compile-config-hash -> origin/jon-chuang/compile-config-hash 2025-03-17T17:41:36.2701149Z * [new branch] jon-chuang/compile-ignored -> origin/jon-chuang/compile-ignored 2025-03-17T17:41:36.2702595Z * [new branch] justinchu/onnxscript-0.2.2 -> origin/justinchu/onnxscript-0.2.2 2025-03-17T17:41:36.2703469Z * [new branch] justinchu/redundant-move -> origin/justinchu/redundant-move 2025-03-17T17:41:36.2704357Z * [new branch] justinchu/retrace-jit -> origin/justinchu/retrace-jit 2025-03-17T17:41:36.2705621Z * [new branch] justinchuby-patch-1 -> origin/justinchuby-patch-1 2025-03-17T17:41:36.2707193Z * [new branch] jwagantall/migrate-checkout -> origin/jwagantall/migrate-checkout 2025-03-17T17:41:36.2708281Z * [new branch] jz/istft -> origin/jz/istft 2025-03-17T17:41:36.2709278Z * [new branch] jz/stft-old-fc -> origin/jz/stft-old-fc 2025-03-17T17:41:36.2710631Z * [new branch] kadeng/dev-1 -> origin/kadeng/dev-1 2025-03-17T17:41:36.2712501Z * [new branch] kadeng/inductor-backend/cutlass-evt-fusion-1 -> origin/kadeng/inductor-backend/cutlass-evt-fusion-1 2025-03-17T17:41:36.2713349Z * [new branch] kadeng/inductor-cutlass-epilogue -> origin/kadeng/inductor-cutlass-epilogue 2025-03-17T17:41:36.2714621Z * [new branch] kenjin/call_method_userdefined -> origin/kenjin/call_method_userdefined 2025-03-17T17:41:36.2715365Z * [new branch] kenjin/lambdas -> origin/kenjin/lambdas 2025-03-17T17:41:36.2716313Z * [new branch] kenjin/norefcycles -> origin/kenjin/norefcycles 2025-03-17T17:41:36.2717346Z * [new branch] kit1980-patch-2 -> origin/kit1980-patch-2 2025-03-17T17:41:36.2718433Z * [new branch] kleidiai_bf16_issue_fix -> origin/kleidiai_bf16_issue_fix 2025-03-17T17:41:36.2719533Z * [new branch] kleidiai_submodule_update -> 
origin/kleidiai_submodule_update 2025-03-17T17:41:36.2720557Z * [new branch] larryliu0820-patch-1 -> origin/larryliu0820-patch-1 2025-03-17T17:41:36.2722155Z * [new branch] leslie/enable_poc_reduction_fusion -> origin/leslie/enable_poc_reduction_fusion 2025-03-17T17:41:36.2722973Z * [new branch] leslie/test_group_gemm_epilogues -> origin/leslie/test_group_gemm_epilogues 2025-03-17T17:41:36.2724446Z * [new branch] lts/release/1.8 -> origin/lts/release/1.8 2025-03-17T17:41:36.2725465Z * [new branch] main -> origin/main 2025-03-17T17:41:36.2726507Z * [new branch] main_dev_hhh -> origin/main_dev_hhh 2025-03-17T17:41:36.2727706Z * [new branch] malfet-patch-1 -> origin/malfet-patch-1 2025-03-17T17:41:36.2729093Z * [new branch] malfet-patch-10 -> origin/malfet-patch-10 2025-03-17T17:41:36.2730156Z * [new branch] malfet-patch-19 -> origin/malfet-patch-19 2025-03-17T17:41:36.2731213Z * [new branch] malfet-patch-2 -> origin/malfet-patch-2 2025-03-17T17:41:36.2732390Z * [new branch] malfet-patch-23 -> origin/malfet-patch-23 2025-03-17T17:41:36.2733289Z * [new branch] malfet-patch-3 -> origin/malfet-patch-3 2025-03-17T17:41:36.2734312Z * [new branch] malfet-patch-32 -> origin/malfet-patch-32 2025-03-17T17:41:36.2735390Z * [new branch] malfet-patch-4 -> origin/malfet-patch-4 2025-03-17T17:41:36.2736417Z * [new branch] malfet-patch-42 -> origin/malfet-patch-42 2025-03-17T17:41:36.2737787Z * [new branch] malfet-patch-5 -> origin/malfet-patch-5 2025-03-17T17:41:36.2738802Z * [new branch] malfet-patch-6 -> origin/malfet-patch-6 2025-03-17T17:41:36.2739870Z * [new branch] malfet-patch-8 -> origin/malfet-patch-8 2025-03-17T17:41:36.2741449Z * [new branch] malfet/add-benchmark-func -> origin/malfet/add-benchmark-func 2025-03-17T17:41:36.2742379Z * [new branch] malfet/delete-find-openmp -> origin/malfet/delete-find-openmp 2025-03-17T17:41:36.2743451Z * [new branch] malfet/mps-fix-rand-5d -> origin/malfet/mps-fix-rand-5d 2025-03-17T17:41:36.2744461Z * [new branch] malfet/mps-fix-strided-logic -> origin/malfet/mps-fix-strided-logic 2025-03-17T17:41:36.2745341Z * [new branch] malfet/mps-implement-col2im -> origin/malfet/mps-implement-col2im 2025-03-17T17:41:36.2746306Z * [new branch] maxautotune_big_gpu -> origin/maxautotune_big_gpu 2025-03-17T17:41:36.2747298Z * [new branch] mem-leak -> origin/mem-leak 2025-03-17T17:41:36.2748218Z * [new branch] mem-leak1 -> origin/mem-leak1 2025-03-17T17:41:36.2749409Z * [new branch] migrate_map -> origin/migrate_map 2025-03-17T17:41:36.2750478Z * [new branch] missing_gloo_causes_deadlock -> origin/missing_gloo_causes_deadlock 2025-03-17T17:41:36.2751829Z * [new branch] mlazos/S429861-debug -> origin/mlazos/S429861-debug 2025-03-17T17:41:36.2752551Z * [new branch] mlazos/aa -> origin/mlazos/aa 2025-03-17T17:41:36.2753624Z * [new branch] mlazos/adam-compiled -> origin/mlazos/adam-compiled 2025-03-17T17:41:36.2754623Z * [new branch] mlazos/adam-fused-bench -> origin/mlazos/adam-fused-bench 2025-03-17T17:41:36.2755569Z * [new branch] mlazos/adam-fused-bench2 -> origin/mlazos/adam-fused-bench2 2025-03-17T17:41:36.2756897Z * [new branch] mlazos/adam-test2 -> origin/mlazos/adam-test2 2025-03-17T17:41:36.2757729Z * [new branch] mlazos/aux-vars -> origin/mlazos/aux-vars 2025-03-17T17:41:36.2759103Z * [new branch] mlazos/backup-test-branch -> origin/mlazos/backup-test-branch 2025-03-17T17:41:36.2760154Z * [new branch] mlazos/bad-cudagraphs -> origin/mlazos/bad-cudagraphs 2025-03-17T17:41:36.2761442Z * [new branch] mlazos/baseline -> origin/mlazos/baseline 2025-03-17T17:41:36.2762525Z * 
[new branch] mlazos/baseline-graph-breaks -> origin/mlazos/baseline-graph-breaks 2025-03-17T17:41:36.2763410Z * [new branch] mlazos/batch-fuse-opt -> origin/mlazos/batch-fuse-opt 2025-03-17T17:41:36.2764354Z * [new branch] mlazos/beta-tensor -> origin/mlazos/beta-tensor 2025-03-17T17:41:36.2765304Z * [new branch] mlazos/buff-opt2 -> origin/mlazos/buff-opt2 2025-03-17T17:41:36.2766257Z * [new branch] mlazos/buffers -> origin/mlazos/buffers 2025-03-17T17:41:36.2767105Z * [new branch] mlazos/buffers2 -> origin/mlazos/buffers2 2025-03-17T17:41:36.2768112Z * [new branch] mlazos/buffers3 -> origin/mlazos/buffers3 2025-03-17T17:41:36.2769721Z * [new branch] mlazos/ck2 -> origin/mlazos/ck2 2025-03-17T17:41:36.2770736Z * [new branch] mlazos/combokernels -> origin/mlazos/combokernels 2025-03-17T17:41:36.2771673Z * [new branch] mlazos/compiled-nadam -> origin/mlazos/compiled-nadam 2025-03-17T17:41:36.2772571Z * [new branch] mlazos/concat2 -> origin/mlazos/concat2 2025-03-17T17:41:36.2773539Z * [new branch] mlazos/copy2 -> origin/mlazos/copy2 2025-03-17T17:41:36.2774896Z * [new branch] mlazos/cudagraph-tests -> origin/mlazos/cudagraph-tests 2025-03-17T17:41:36.2775859Z * [new branch] mlazos/cudagraphs-measurement -> origin/mlazos/cudagraphs-measurement 2025-03-17T17:41:36.2776695Z * [new branch] mlazos/data-gather -> origin/mlazos/data-gather 2025-03-17T17:41:36.2777673Z * [new branch] mlazos/data-ptrs2 -> origin/mlazos/data-ptrs2 2025-03-17T17:41:36.2778641Z * [new branch] mlazos/data-ptrs3 -> origin/mlazos/data-ptrs3 2025-03-17T17:41:36.2779682Z * [new branch] mlazos/dataclass-proxy -> origin/mlazos/dataclass-proxy 2025-03-17T17:41:36.2780746Z * [new branch] mlazos/disable-closures -> origin/mlazos/disable-closures 2025-03-17T17:41:36.2781673Z * [new branch] mlazos/disabled-opt -> origin/mlazos/disabled-opt 2025-03-17T17:41:36.2782564Z * [new branch] mlazos/evt -> origin/mlazos/evt 2025-03-17T17:41:36.2783700Z * [new branch] mlazos/exp_disable -> origin/mlazos/exp_disable 2025-03-17T17:41:36.2784608Z * [new branch] mlazos/faster -> origin/mlazos/faster 2025-03-17T17:41:36.2785611Z * [new branch] mlazos/faster2 -> origin/mlazos/faster2 2025-03-17T17:41:36.2786830Z * [new branch] mlazos/fe-copy -> origin/mlazos/fe-copy 2025-03-17T17:41:36.2787787Z * [new branch] mlazos/foreach-op -> origin/mlazos/foreach-op 2025-03-17T17:41:36.2788832Z * [new branch] mlazos/foreach-reds -> origin/mlazos/foreach-reds 2025-03-17T17:41:36.2789727Z * [new branch] mlazos/freezing -> origin/mlazos/freezing 2025-03-17T17:41:36.2790770Z * [new branch] mlazos/gen-foreach -> origin/mlazos/gen-foreach 2025-03-17T17:41:36.2791631Z * [new branch] mlazos/h-comp -> origin/mlazos/h-comp 2025-03-17T17:41:36.2792618Z * [new branch] mlazos/h-comp2 -> origin/mlazos/h-comp2 2025-03-17T17:41:36.2793572Z * [new branch] mlazos/hc-hf -> origin/mlazos/hc-hf 2025-03-17T17:41:36.2794668Z * [new branch] mlazos/init-per-param -> origin/mlazos/init-per-param 2025-03-17T17:41:36.2795633Z * [new branch] mlazos/init_per_param -> origin/mlazos/init_per_param 2025-03-17T17:41:36.2796608Z * [new branch] mlazos/less-guards -> origin/mlazos/less-guards 2025-03-17T17:41:36.2797675Z * [new branch] mlazos/lr-composibility -> origin/mlazos/lr-composibility 2025-03-17T17:41:36.2798707Z * [new branch] mlazos/main-test-enablement -> origin/mlazos/main-test-enablement 2025-03-17T17:41:36.2799518Z * [new branch] mlazos/main2 -> origin/mlazos/main2 2025-03-17T17:41:36.2800844Z * [new branch] mlazos/main_test -> origin/mlazos/main_test 2025-03-17T17:41:36.2801719Z * 
[new branch] mlazos/mcg -> origin/mlazos/mcg 2025-03-17T17:41:36.2802736Z * [new branch] mlazos/mcg2 -> origin/mlazos/mcg2 2025-03-17T17:41:36.2803808Z * [new branch] mlazos/meta-guards -> origin/mlazos/meta-guards 2025-03-17T17:41:36.2805047Z * [new branch] mlazos/mlazos/ck2 -> origin/mlazos/mlazos/ck2 2025-03-17T17:41:36.2805976Z * [new branch] mlazos/mlazos/clean -> origin/mlazos/mlazos/clean 2025-03-17T17:41:36.2806870Z * [new branch] mlazos/mlazos/faster2 -> origin/mlazos/mlazos/faster2 2025-03-17T17:41:36.2807964Z * [new branch] mlazos/mlazos/foreach-map-adam -> origin/mlazos/mlazos/foreach-map-adam 2025-03-17T17:41:36.2808867Z * [new branch] mlazos/mlazos/subclass-test -> origin/mlazos/mlazos/subclass-test 2025-03-17T17:41:36.2809864Z * [new branch] mlazos/mlazos/tf-mode-backup -> origin/mlazos/mlazos/tf-mode-backup 2025-03-17T17:41:36.2810802Z * [new branch] mlazos/mlazos/tf-trace-full -> origin/mlazos/mlazos/tf-trace-full 2025-03-17T17:41:36.2811653Z * [new branch] mlazos/mod-fix -> origin/mlazos/mod-fix 2025-03-17T17:41:36.2812743Z * [new branch] mlazos/more-tests -> origin/mlazos/more-tests 2025-03-17T17:41:36.2813724Z * [new branch] mlazos/mutable-backup -> origin/mlazos/mutable-backup 2025-03-17T17:41:36.2814632Z * [new branch] mlazos/mv-tfo -> origin/mlazos/mv-tfo 2025-03-17T17:41:36.2816078Z * [new branch] mlazos/no-cpp -> origin/mlazos/no-cpp 2025-03-17T17:41:36.2817213Z * [new branch] mlazos/no-init-group-handling -> origin/mlazos/no-init-group-handling 2025-03-17T17:41:36.2818117Z * [new branch] mlazos/op-investigation -> origin/mlazos/op-investigation 2025-03-17T17:41:36.2819056Z * [new branch] mlazos/opt-bench-exp2 -> origin/mlazos/opt-bench-exp2 2025-03-17T17:41:36.2819974Z * [new branch] mlazos/opt-bench2 -> origin/mlazos/opt-bench2 2025-03-17T17:41:36.2820908Z * [new branch] mlazos/opt-bench3 -> origin/mlazos/opt-bench3 2025-03-17T17:41:36.2821854Z * [new branch] mlazos/opt-incr -> origin/mlazos/opt-incr 2025-03-17T17:41:36.2823105Z * [new branch] mlazos/opt-recipe -> origin/mlazos/opt-recipe 2025-03-17T17:41:36.2823950Z * [new branch] mlazos/opt-slowdown -> origin/mlazos/opt-slowdown 2025-03-17T17:41:36.2824906Z * [new branch] mlazos/proxy-ctors -> origin/mlazos/proxy-ctors 2025-03-17T17:41:36.2825895Z * [new branch] mlazos/proxy-opt -> origin/mlazos/proxy-opt 2025-03-17T17:41:36.2826850Z * [new branch] mlazos/pt -> origin/mlazos/pt 2025-03-17T17:41:36.2827911Z * [new branch] mlazos/restart -> origin/mlazos/restart 2025-03-17T17:41:36.2829286Z * [new branch] mlazos/rtp -> origin/mlazos/rtp 2025-03-17T17:41:36.2830254Z * [new branch] mlazos/sdpa-driss -> origin/mlazos/sdpa-driss 2025-03-17T17:41:36.2831364Z * [new branch] mlazos/static-inputs-log -> origin/mlazos/static-inputs-log 2025-03-17T17:41:36.2832173Z * [new branch] mlazos/subclass-test -> origin/mlazos/subclass-test 2025-03-17T17:41:36.2833199Z * [new branch] mlazos/td-fix2 -> origin/mlazos/td-fix2 2025-03-17T17:41:36.2834297Z * [new branch] mlazos/tensor-hasattr2 -> origin/mlazos/tensor-hasattr2 2025-03-17T17:41:36.2835252Z * [new branch] mlazos/tensor-inherit-backup -> origin/mlazos/tensor-inherit-backup 2025-03-17T17:41:36.2836093Z * [new branch] mlazos/tensor-like-fix -> origin/mlazos/tensor-like-fix 2025-03-17T17:41:36.2837116Z * [new branch] mlazos/tensor-lr -> origin/mlazos/tensor-lr 2025-03-17T17:41:36.2838754Z * [new branch] mlazos/tensor-lr2 -> origin/mlazos/tensor-lr2 2025-03-17T17:41:36.2839497Z * [new branch] mlazos/tf-inherit -> origin/mlazos/tf-inherit 2025-03-17T17:41:36.2840501Z * [new 
branch] mlazos/tf-mode -> origin/mlazos/tf-mode 2025-03-17T17:41:36.2841769Z * [new branch] mlazos/tf-mode-backup2 -> origin/mlazos/tf-mode-backup2 2025-03-17T17:41:36.2842631Z * [new branch] mlazos/tf-mode-reland -> origin/mlazos/tf-mode-reland 2025-03-17T17:41:36.2843881Z * [new branch] mlazos/tf-mode-reland2 -> origin/mlazos/tf-mode-reland2 2025-03-17T17:41:36.2844807Z * [new branch] mlazos/tf-mode-reland3 -> origin/mlazos/tf-mode-reland3 2025-03-17T17:41:36.2845731Z * [new branch] mlazos/tf-refactor -> origin/mlazos/tf-refactor 2025-03-17T17:41:36.2846827Z * [new branch] mlazos/tf-subclass-stack -> origin/mlazos/tf-subclass-stack 2025-03-17T17:41:36.2847715Z * [new branch] mlazos/tf-trace-full -> origin/mlazos/tf-trace-full 2025-03-17T17:41:36.2848588Z * [new branch] mlazos/th -> origin/mlazos/th 2025-03-17T17:41:36.2849647Z * [new branch] mlazos/tune-proto -> origin/mlazos/tune-proto 2025-03-17T17:41:36.2850609Z * [new branch] mlazos/vary-beta -> origin/mlazos/vary-beta 2025-03-17T17:41:36.2851682Z * [new branch] mlazos/vary-beta2 -> origin/mlazos/vary-beta2 2025-03-17T17:41:36.2852656Z * [new branch] mlazos/weird-perf1 -> origin/mlazos/weird-perf1 2025-03-17T17:41:36.2853802Z * [new branch] mod_guards1 -> origin/mod_guards1 2025-03-17T17:41:36.2854762Z * [new branch] mod_guards3 -> origin/mod_guards3 2025-03-17T17:41:36.2855856Z * [new branch] moderniz29_cyy -> origin/moderniz29_cyy 2025-03-17T17:41:36.2856991Z * [new branch] mps-linear-1d -> origin/mps-linear-1d 2025-03-17T17:41:36.2858344Z * [new branch] mradmila/host_stats -> origin/mradmila/host_stats 2025-03-17T17:41:36.2859411Z * [new branch] msaroufim-patch-10 -> origin/msaroufim-patch-10 2025-03-17T17:41:36.2860484Z * [new branch] msaroufim-patch-11 -> origin/msaroufim-patch-11 2025-03-17T17:41:36.2861613Z * [new branch] msaroufim-patch-12 -> origin/msaroufim-patch-12 2025-03-17T17:41:36.2862824Z * [new branch] msaroufim-patch-13 -> origin/msaroufim-patch-13 2025-03-17T17:41:36.2864310Z * [new branch] msaroufim-patch-14 -> origin/msaroufim-patch-14 2025-03-17T17:41:36.2865520Z * [new branch] msaroufim/cache -> origin/msaroufim/cache 2025-03-17T17:41:36.2866646Z * [new branch] msaroufim/dtensorfusedadam -> origin/msaroufim/dtensorfusedadam 2025-03-17T17:41:36.2867502Z * [new branch] msaroufim/warn_once -> origin/msaroufim/warn_once 2025-03-17T17:41:36.2868454Z * [new branch] mypy_fix -> origin/mypy_fix 2025-03-17T17:41:36.2869718Z * [new branch] myst_nb_trial -> origin/myst_nb_trial 2025-03-17T17:41:36.2870820Z * [new branch] nWEIdia-patch-1 -> origin/nWEIdia-patch-1 2025-03-17T17:41:36.2871872Z * [new branch] nestedfairseq2ops1 -> origin/nestedfairseq2ops1 2025-03-17T17:41:36.2872836Z * [new branch] new-batch-norm -> origin/new-batch-norm 2025-03-17T17:41:36.2873877Z * [new branch] new_guard_system -> origin/new_guard_system 2025-03-17T17:41:36.2875242Z * [new branch] ngimel/bits -> origin/ngimel/bits 2025-03-17T17:41:36.2876234Z * [new branch] ngimel/copy2d -> origin/ngimel/copy2d 2025-03-17T17:41:36.2877096Z * [new branch] ngimel/gg -> origin/ngimel/gg 2025-03-17T17:41:36.2878019Z * [new branch] ngimel/gg_new -> origin/ngimel/gg_new 2025-03-17T17:41:36.2878998Z * [new branch] nightly -> origin/nightly 2025-03-17T17:41:36.2880574Z * [new branch] nikitaved/solve_doc_update -> origin/nikitaved/solve_doc_update 2025-03-17T17:41:36.2881507Z * [new branch] nikitaved/tensordot -> origin/nikitaved/tensordot 2025-03-17T17:41:36.2882465Z * [new branch] offline -> origin/offline 2025-03-17T17:41:36.2883779Z * [new branch] 
openblas_gemv -> origin/openblas_gemv 2025-03-17T17:41:36.2885466Z * [new branch] orig/release/1.10 -> origin/orig/release/1.10 2025-03-17T17:41:36.2886446Z * [new branch] orig/release/1.11 -> origin/orig/release/1.11 2025-03-17T17:41:36.2887439Z * [new branch] orig/release/1.12 -> origin/orig/release/1.12 2025-03-17T17:41:36.2888681Z * [new branch] orig/release/1.13 -> origin/orig/release/1.13 2025-03-17T17:41:36.2889691Z * [new branch] orig/release/1.6 -> origin/orig/release/1.6 2025-03-17T17:41:36.2890938Z * [new branch] orig/release/1.7 -> origin/orig/release/1.7 2025-03-17T17:41:36.2891874Z * [new branch] orig/release/1.8 -> origin/orig/release/1.8 2025-03-17T17:41:36.2892921Z * [new branch] orig/release/1.9 -> origin/orig/release/1.9 2025-03-17T17:41:36.2893889Z * [new branch] orig/release/2.0 -> origin/orig/release/2.0 2025-03-17T17:41:36.2894929Z * [new branch] orig/release/2.1 -> origin/orig/release/2.1 2025-03-17T17:41:36.2895896Z * [new branch] orig/release/2.2 -> origin/orig/release/2.2 2025-03-17T17:41:36.2896860Z * [new branch] orig/release/2.3 -> origin/orig/release/2.3 2025-03-17T17:41:36.2897871Z * [new branch] orig/release/2.4 -> origin/orig/release/2.4 2025-03-17T17:41:36.2898801Z * [new branch] orig/release/2.5 -> origin/orig/release/2.5 2025-03-17T17:41:36.2899769Z * [new branch] orig/release/2.6 -> origin/orig/release/2.6 2025-03-17T17:41:36.2900841Z * [new branch] orig/release/2.7 -> origin/orig/release/2.7 2025-03-17T17:41:36.2903137Z * [new branch] origin/gh/stroxler/1/head -> origin/origin/gh/stroxler/1/head 2025-03-17T17:41:36.2904287Z * [new branch] origin/voz/serde -> origin/origin/voz/serde 2025-03-17T17:41:36.2905632Z * [new branch] oulgen/fx_graph -> origin/oulgen/fx_graph 2025-03-17T17:41:36.2906638Z * [new branch] padded-tensor -> origin/padded-tensor 2025-03-17T17:41:36.2907858Z * [new branch] palic_hotfix -> origin/palic_hotfix 2025-03-17T17:41:36.2909266Z * [new branch] parallel_cat -> origin/parallel_cat 2025-03-17T17:41:36.2910239Z * [new branch] parallel_reduce -> origin/parallel_reduce 2025-03-17T17:41:36.2911350Z * [new branch] pca2 -> origin/pca2 2025-03-17T17:41:36.2912894Z * [new branch] pianpwk/backed_size_oblivious -> origin/pianpwk/backed_size_oblivious 2025-03-17T17:41:36.2913851Z * [new branch] pianpwk/backed_size_oblivious_global -> origin/pianpwk/backed_size_oblivious_global 2025-03-17T17:41:36.2914691Z * [new branch] pianpwk/backed_symint_endofbounds -> origin/pianpwk/backed_symint_endofbounds 2025-03-17T17:41:36.2915619Z * [new branch] pianpwk/clear_pending_unbacked -> origin/pianpwk/clear_pending_unbacked 2025-03-17T17:41:36.2916479Z * [new branch] pianpwk/draft_strict_stack -> origin/pianpwk/draft_strict_stack 2025-03-17T17:41:36.2917716Z * [new branch] pianpwk/inductor_unbacked_symint -> origin/pianpwk/inductor_unbacked_symint 2025-03-17T17:41:36.2919326Z * [new branch] pianpwk/pre_forward_hook -> origin/pianpwk/pre_forward_hook 2025-03-17T17:41:36.2920298Z * [new branch] pianpwk/should_swap_oblivious -> origin/pianpwk/should_swap_oblivious 2025-03-17T17:41:36.2921360Z * [new branch] pianpwk/symbol_provenance_v1 -> origin/pianpwk/symbol_provenance_v1 2025-03-17T17:41:36.2922322Z * [new branch] pianpwk/torchbench_combine_args -> origin/pianpwk/torchbench_combine_args 2025-03-17T17:41:36.2923270Z * [new branch] pianpwk/treat_sizes_as_size_like -> origin/pianpwk/treat_sizes_as_size_like 2025-03-17T17:41:36.2924233Z * [new branch] pianpwk/unbacked_bindings -> origin/pianpwk/unbacked_bindings 2025-03-17T17:41:36.2925293Z * [new branch] 
plain-metal-mul-kernel -> origin/plain-metal-mul-kernel 2025-03-17T17:41:36.2926235Z * [new branch] polyfill-class -> origin/polyfill-class 2025-03-17T17:41:36.2927696Z * [new branch] pr/131860 -> origin/pr/131860 2025-03-17T17:41:36.2928736Z * [new branch] pr149164 -> origin/pr149164 2025-03-17T17:41:36.2930142Z * [new branch] prepare-android-artifacts -> origin/prepare-android-artifacts 2025-03-17T17:41:36.2931196Z * [new branch] print_hostname_rocm_runners -> origin/print_hostname_rocm_runners 2025-03-17T17:41:36.2932098Z * [new branch] pt-debug-cpu0 -> origin/pt-debug-cpu0 2025-03-17T17:41:36.2933135Z * [new branch] pt-opt-cuda3 -> origin/pt-opt-cuda3 2025-03-17T17:41:36.2934656Z * [new branch] python_compiled_autograd -> origin/python_compiled_autograd 2025-03-17T17:41:36.2935594Z * [new branch] qat-conv-bn-1d -> origin/qat-conv-bn-1d 2025-03-17T17:41:36.2936687Z * [new branch] qat-remove-bias-temp -> origin/qat-remove-bias-temp 2025-03-17T17:41:36.2937973Z * [new branch] qat_cudnn_batchnorm -> origin/qat_cudnn_batchnorm 2025-03-17T17:41:36.2939125Z * [new branch] qat_preserve_source_fn_stack -> origin/qat_preserve_source_fn_stack 2025-03-17T17:41:36.2940909Z * [new branch] qchip/export-D54134695 -> origin/qchip/export-D54134695 2025-03-17T17:41:36.2941788Z * [new branch] raggedsdpa -> origin/raggedsdpa 2025-03-17T17:41:36.2942914Z * [new branch] reenable-sgd-benchmark -> origin/reenable-sgd-benchmark 2025-03-17T17:41:36.2943933Z * [new branch] refactor-adamw -> origin/refactor-adamw 2025-03-17T17:41:36.2945428Z * [new branch] release/1.10 -> origin/release/1.10 2025-03-17T17:41:36.2946445Z * [new branch] release/1.11 -> origin/release/1.11 2025-03-17T17:41:36.2947659Z * [new branch] release/1.12 -> origin/release/1.12 2025-03-17T17:41:36.2948597Z * [new branch] release/1.13 -> origin/release/1.13 2025-03-17T17:41:36.2949525Z * [new branch] release/1.4 -> origin/release/1.4 2025-03-17T17:41:36.2950353Z * [new branch] release/1.4.1 -> origin/release/1.4.1 2025-03-17T17:41:36.2951343Z * [new branch] release/1.5 -> origin/release/1.5 2025-03-17T17:41:36.2952380Z * [new branch] release/1.6 -> origin/release/1.6 2025-03-17T17:41:36.2953411Z * [new branch] release/1.7 -> origin/release/1.7 2025-03-17T17:41:36.2954991Z * [new branch] release/1.8 -> origin/release/1.8 2025-03-17T17:41:36.2955909Z * [new branch] release/1.9 -> origin/release/1.9 2025-03-17T17:41:36.2956915Z * [new branch] release/2.0 -> origin/release/2.0 2025-03-17T17:41:36.2958180Z * [new branch] release/2.1 -> origin/release/2.1 2025-03-17T17:41:36.2959300Z * [new branch] release/2.2 -> origin/release/2.2 2025-03-17T17:41:36.2960630Z * [new branch] release/2.3 -> origin/release/2.3 2025-03-17T17:41:36.2962140Z * [new branch] release/2.4 -> origin/release/2.4 2025-03-17T17:41:36.2963422Z * [new branch] release/2.5 -> origin/release/2.5 2025-03-17T17:41:36.2964454Z * [new branch] release/2.6 -> origin/release/2.6 2025-03-17T17:41:36.2965608Z * [new branch] release/2.7 -> origin/release/2.7 2025-03-17T17:41:36.2966656Z * [new branch] release_notes -> origin/release_notes 2025-03-17T17:41:36.2967910Z * [new branch] remove-edit-on-github -> origin/remove-edit-on-github 2025-03-17T17:41:36.2968947Z * [new branch] remove-link-survey -> origin/remove-link-survey 2025-03-17T17:41:36.2970102Z * [new branch] remove_global_ns -> origin/remove_global_ns 2025-03-17T17:41:36.2971123Z * [new branch] requires_grad_fix -> origin/requires_grad_fix 2025-03-17T17:41:36.2973071Z * [new branch] revert-111036-skylion007/backport-2-1-1-2023-10-11-0 
-> origin/revert-111036-skylion007/backport-2-1-1-2023-10-11-0 2025-03-17T17:41:36.2973597Z * [new branch] revert-112125 -> origin/revert-112125 2025-03-17T17:41:36.2976029Z * [new branch] revert-131069-gh/krzysztofjordan/1/head -> origin/revert-131069-gh/krzysztofjordan/1/head 2025-03-17T17:41:36.2978166Z * [new branch] revert-131469-gh/andrewor14/51/head -> origin/revert-131469-gh/andrewor14/51/head 2025-03-17T17:41:36.2979249Z * [new branch] revert_realize_input_ExternKernel -> origin/revert_realize_input_ExternKernel 2025-03-17T17:41:36.2980230Z * [new branch] rohan-varma-patch-13 -> origin/rohan-varma-patch-13 2025-03-17T17:41:36.2981441Z * [new branch] rohan-varma-patch-14 -> origin/rohan-varma-patch-14 2025-03-17T17:41:36.2982448Z * [new branch] rohan-varma-patch-15 -> origin/rohan-varma-patch-15 2025-03-17T17:41:36.2983501Z * [new branch] rohan-varma-patch-16 -> origin/rohan-varma-patch-16 2025-03-17T17:41:36.2984463Z * [new branch] rprop-playground -> origin/rprop-playground 2025-03-17T17:41:36.2985548Z * [new branch] run-ios-test-device-farm -> origin/run-ios-test-device-farm 2025-03-17T17:41:36.2987381Z * [new branch] ryanguo99/cleanup-dynamo-expected-failures -> origin/ryanguo99/cleanup-dynamo-expected-failures 2025-03-17T17:41:36.2988014Z * [new branch] ryanguo99/fix-closure-var -> origin/ryanguo99/fix-closure-var 2025-03-17T17:41:36.2989431Z * [new branch] rzou/cache_name -> origin/rzou/cache_name 2025-03-17T17:41:36.2990356Z * [new branch] rzou/faketensor_bench -> origin/rzou/faketensor_bench 2025-03-17T17:41:36.2991220Z * [new branch] rzou/fix -> origin/rzou/fix 2025-03-17T17:41:36.2992168Z * [new branch] rzou/fix2 -> origin/rzou/fix2 2025-03-17T17:41:36.2993067Z * [new branch] rzou/njt -> origin/rzou/njt 2025-03-17T17:41:36.2993993Z * [new branch] rzou/operator -> origin/rzou/operator 2025-03-17T17:41:36.2995010Z * [new branch] rzou/pca -> origin/rzou/pca 2025-03-17T17:41:36.2995967Z * [new branch] rzou/pipe_split -> origin/rzou/pipe_split 2025-03-17T17:41:36.2996852Z * [new branch] rzou/realprop -> origin/rzou/realprop 2025-03-17T17:41:36.2997789Z * [new branch] rzou/setup_context -> origin/rzou/setup_context 2025-03-17T17:41:36.2999554Z * [new branch] sanchitintel/fix_llama_da8w8_corner_case -> origin/sanchitintel/fix_llama_da8w8_corner_case 2025-03-17T17:41:36.3000693Z * [new branch] sanchitintel/gemm_template_avoid_malloc_lock_contention -> origin/sanchitintel/gemm_template_avoid_malloc_lock_contention 2025-03-17T17:41:36.3001575Z * [new branch] sanchitintel/modify_fp32_micro_gemm -> origin/sanchitintel/modify_fp32_micro_gemm 2025-03-17T17:41:36.3002819Z * [new branch] sanchitintel/refactor_aten_int8_woq_gemm -> origin/sanchitintel/refactor_aten_int8_woq_gemm 2025-03-17T17:41:36.3004560Z * [new branch] sanchitintel/weird_thing_with_test_cpu_select_algorithm -> origin/sanchitintel/weird_thing_with_test_cpu_select_algorithm 2025-03-17T17:41:36.3005403Z * [new branch] sanchitintel/woq_gemm_buf_size_patch -> origin/sanchitintel/woq_gemm_buf_size_patch 2025-03-17T17:41:36.3007051Z * [new branch] sanchitj/remove_duplicate_line_from_freezing.py -> origin/sanchitj/remove_duplicate_line_from_freezing.py 2025-03-17T17:41:36.3007734Z * [new branch] sapling-pr-archive-SS-JIA -> origin/sapling-pr-archive-SS-JIA 2025-03-17T17:41:36.3008723Z * [new branch] scatter-dim -> origin/scatter-dim 2025-03-17T17:41:36.3009705Z * [new branch] sdpa_autocast_cpu -> origin/sdpa_autocast_cpu 2025-03-17T17:41:36.3011047Z * [new branch] sdym/2.5.1 -> origin/sdym/2.5.1 2025-03-17T17:41:36.3012059Z * 
[new branch] sdym/docker-python-3.8 -> origin/sdym/docker-python-3.8 2025-03-17T17:41:36.3013015Z * [new branch] sdym/revert-107846 -> origin/sdym/revert-107846 2025-03-17T17:41:36.3013926Z * [new branch] sdym/revert-109859 -> origin/sdym/revert-109859 2025-03-17T17:41:36.3014951Z * [new branch] sdym/skip-asan -> origin/sdym/skip-asan 2025-03-17T17:41:36.3016474Z * [new branch] sdym/todo-docstring -> origin/sdym/todo-docstring 2025-03-17T17:41:36.3017683Z * [new branch] sdym/torchfix -> origin/sdym/torchfix 2025-03-17T17:41:36.3019063Z * [new branch] sdym/torchvision-pretrained -> origin/sdym/torchvision-pretrained 2025-03-17T17:41:36.3020165Z * [new branch] sdym/typed-storage -> origin/sdym/typed-storage 2025-03-17T17:41:36.3021471Z * [new branch] sdym/wno -> origin/sdym/wno 2025-03-17T17:41:36.3022848Z * [new branch] seemethere-patch-1 -> origin/seemethere-patch-1 2025-03-17T17:41:36.3024604Z * [new branch] seemethere/add_h100_nightly_perf_benchmarks -> origin/seemethere/add_h100_nightly_perf_benchmarks 2025-03-17T17:41:36.3025330Z * [new branch] share_and_pin_fork -> origin/share_and_pin_fork 2025-03-17T17:41:36.3026951Z * [new branch] shengf/fx-xform-perf -> origin/shengf/fx-xform-perf 2025-03-17T17:41:36.3027987Z * [new branch] shikaili_fp8_allgather -> origin/shikaili_fp8_allgather 2025-03-17T17:41:36.3029425Z * [new branch] shunting-multi-kernel-2 -> origin/shunting-multi-kernel-2 2025-03-17T17:41:36.3030411Z * [new branch] shunting-multi-kernel-3 -> origin/shunting-multi-kernel-3 2025-03-17T17:41:36.3031483Z * [new branch] shunting-scale-down-rblock -> origin/shunting-scale-down-rblock 2025-03-17T17:41:36.3032530Z * [new branch] shunting-tigher-upperbound -> origin/shunting-tigher-upperbound 2025-03-17T17:41:36.3033556Z * [new branch] shunting-triton-pin-update-5 -> origin/shunting-triton-pin-update-5 2025-03-17T17:41:36.3034665Z * [new branch] simplify-fq-per-channel -> origin/simplify-fq-per-channel 2025-03-17T17:41:36.3035960Z * [new branch] source_fn_stack -> origin/source_fn_stack 2025-03-17T17:41:36.3037420Z * [new branch] speedup-mps-string-key -> origin/speedup-mps-string-key 2025-03-17T17:41:36.3038779Z * [new branch] sqzhang/flight4 -> origin/sqzhang/flight4 2025-03-17T17:41:36.3039755Z * [new branch] sqzhang/flight4plus -> origin/sqzhang/flight4plus 2025-03-17T17:41:36.3041339Z * [new branch] sraikund/record_funct_test -> origin/sraikund/record_funct_test 2025-03-17T17:41:36.3042314Z * [new branch] sraikund16/test -> origin/sraikund16/test 2025-03-17T17:41:36.3043488Z * [new branch] stable-library -> origin/stable-library 2025-03-17T17:41:36.3044599Z * [new branch] subscribe_codeowners_lucasllc -> origin/subscribe_codeowners_lucasllc 2025-03-17T17:41:36.3045899Z * [new branch] super -> origin/super 2025-03-17T17:41:36.3047034Z * [new branch] svekars-patch-7 -> origin/svekars-patch-7 2025-03-17T17:41:36.3048313Z * [new branch] switch-bn -> origin/switch-bn 2025-03-17T17:41:36.3049360Z * [new branch] switch-to-new-theme -> origin/switch-to-new-theme 2025-03-17T17:41:36.3050501Z * [new branch] sympy-bottleneck-repro -> origin/sympy-bottleneck-repro 2025-03-17T17:41:36.3051835Z * [new branch] teja/dcp_poc -> origin/teja/dcp_poc 2025-03-17T17:41:36.3052832Z * [new branch] tensor_life -> origin/tensor_life 2025-03-17T17:41:36.3053991Z * [new branch] tensordict_integration -> origin/tensordict_integration 2025-03-17T17:41:36.3055194Z * [new branch] test-move-conda-builds -> origin/test-move-conda-builds 2025-03-17T17:41:36.3056294Z * [new branch] test-torchvision-install-ci -> 
origin/test-torchvision-install-ci 2025-03-17T17:41:36.3057514Z * [new branch] test/inductor -> origin/test/inductor 2025-03-17T17:41:36.3058625Z * [new branch] test_od_cudnn_bn_qat_fusion -> origin/test_od_cudnn_bn_qat_fusion 2025-03-17T17:41:36.3059615Z * [new branch] tidy_performance_cyy -> origin/tidy_performance_cyy 2025-03-17T17:41:36.3060666Z * [new branch] torch-abi-version -> origin/torch-abi-version 2025-03-17T17:41:36.3061681Z * [new branch] torchgen_ns -> origin/torchgen_ns 2025-03-17T17:41:36.3062803Z * [new branch] trace_fsdp_torchtune_lora -> origin/trace_fsdp_torchtune_lora 2025-03-17T17:41:36.3063819Z * [new branch] traceable_fsdp_unit_tests -> origin/traceable_fsdp_unit_tests 2025-03-17T17:41:36.3064848Z * [new branch] tree_loop_vec_base -> origin/tree_loop_vec_base 2025-03-17T17:41:36.3066025Z * [new branch] tree_vec_base -> origin/tree_vec_base 2025-03-17T17:41:36.3067162Z * [new branch] triton-cpu-arm-expriment -> origin/triton-cpu-arm-expriment 2025-03-17T17:41:36.3068332Z * [new branch] triton-update -> origin/triton-update 2025-03-17T17:41:36.3069230Z * [new branch] triton_kernel -> origin/triton_kernel 2025-03-17T17:41:36.3070302Z * [new branch] triton_kernel_perf -> origin/triton_kernel_perf 2025-03-17T17:41:36.3071308Z * [new branch] try-speedup-docbuild -> origin/try-speedup-docbuild 2025-03-17T17:41:36.3072213Z * [new branch] type_dec -> origin/type_dec 2025-03-17T17:41:36.3073417Z * [new branch] unbreak_cpp_builder_clang -> origin/unbreak_cpp_builder_clang 2025-03-17T17:41:36.3075124Z * [new branch] update-audio-commit-hash/13210264744-1454-1 -> origin/update-audio-commit-hash/13210264744-1454-1 2025-03-17T17:41:36.3076056Z * [new branch] update-audio-commit-hash/13402729107-1466-1 -> origin/update-audio-commit-hash/13402729107-1466-1 2025-03-17T17:41:36.3077347Z * [new branch] update-executorch-commit-hash/12838938822-1425-1 -> origin/update-executorch-commit-hash/12838938822-1425-1 2025-03-17T17:41:36.3078301Z * [new branch] update-executorch-commit-hash/13319730828-1460-1 -> origin/update-executorch-commit-hash/13319730828-1460-1 2025-03-17T17:41:36.3079311Z * [new branch] update-executorch-commit-hash/13339750520-1461-1 -> origin/update-executorch-commit-hash/13339750520-1461-1 2025-03-17T17:41:36.3080144Z * [new branch] update-executorch-commit-hash/13349943940-1462-1 -> origin/update-executorch-commit-hash/13349943940-1462-1 2025-03-17T17:41:36.3081058Z * [new branch] update-executorch-commit-hash/13360269739-1463-1 -> origin/update-executorch-commit-hash/13360269739-1463-1 2025-03-17T17:41:36.3082230Z * [new branch] update-executorch-commit-hash/13380672687-1464-1 -> origin/update-executorch-commit-hash/13380672687-1464-1 2025-03-17T17:41:36.3083573Z * [new branch] update-executorch-commit-hash/13402729107-1466-1 -> origin/update-executorch-commit-hash/13402729107-1466-1 2025-03-17T17:41:36.3085002Z * [new branch] update-triton-commit-hash/13663274526-1487-2 -> origin/update-triton-commit-hash/13663274526-1487-2 2025-03-17T17:41:36.3086708Z * [new branch] update-vision-commit-hash/6210383723-710-1 -> origin/update-vision-commit-hash/6210383723-710-1 2025-03-17T17:41:36.3087639Z * [new branch] update-vision-commit-hash/6319671985-721-1 -> origin/update-vision-commit-hash/6319671985-721-1 2025-03-17T17:41:36.3088521Z * [new branch] update-vision-commit-hash/6345577305-723-1 -> origin/update-vision-commit-hash/6345577305-723-1 2025-03-17T17:41:36.3089474Z * [new branch] update-vision-commit-hash/6366568705-725-1 -> 
origin/update-vision-commit-hash/6366568705-725-1 2025-03-17T17:41:36.3090431Z * [new branch] update-vision-commit-hash/6386942932-727-1 -> origin/update-vision-commit-hash/6386942932-727-1 2025-03-17T17:41:36.3091380Z * [new branch] update-vision-commit-hash/6399845260-728-1 -> origin/update-vision-commit-hash/6399845260-728-1 2025-03-17T17:41:36.3092613Z * [new branch] update-vision-commit-hash/6412969951-729-1 -> origin/update-vision-commit-hash/6412969951-729-1 2025-03-17T17:41:36.3093876Z * [new branch] update-vision-commit-hash/6425844356-730-1 -> origin/update-vision-commit-hash/6425844356-730-1 2025-03-17T17:41:36.3094836Z * [new branch] update-vision-commit-hash/6463026337-734-1 -> origin/update-vision-commit-hash/6463026337-734-1 2025-03-17T17:41:36.3095845Z * [new branch] update-vision-commit-hash/6489506557-736-1 -> origin/update-vision-commit-hash/6489506557-736-1 2025-03-17T17:41:36.3096804Z * [new branch] update-vision-commit-hash/6520762621-739-1 -> origin/update-vision-commit-hash/6520762621-739-1 2025-03-17T17:41:36.3097745Z * [new branch] update-vision-commit-hash/6581672893-744-1 -> origin/update-vision-commit-hash/6581672893-744-1 2025-03-17T17:41:36.3098815Z * [new branch] update-vision-commit-hash/6593929043-745-1 -> origin/update-vision-commit-hash/6593929043-745-1 2025-03-17T17:41:36.3099758Z * [new branch] update-vision-commit-hash/6634009725-750-1 -> origin/update-vision-commit-hash/6634009725-750-1 2025-03-17T17:41:36.3100744Z * [new branch] update-vision-commit-hash/6673463792-754-1 -> origin/update-vision-commit-hash/6673463792-754-1 2025-03-17T17:41:36.3101689Z * [new branch] update-vision-commit-hash/6700258936-758-1 -> origin/update-vision-commit-hash/6700258936-758-1 2025-03-17T17:41:36.3102663Z * [new branch] update-vision-commit-hash/6805589684-770-1 -> origin/update-vision-commit-hash/6805589684-770-1 2025-03-17T17:41:36.3103612Z * [new branch] update-vision-commit-hash/6818989957-773-1 -> origin/update-vision-commit-hash/6818989957-773-1 2025-03-17T17:41:36.3104545Z * [new branch] update-vision-commit-hash/6830864778-774-1 -> origin/update-vision-commit-hash/6830864778-774-1 2025-03-17T17:41:36.3105535Z * [new branch] update-vision-commit-hash/6857388096-777-1 -> origin/update-vision-commit-hash/6857388096-777-1 2025-03-17T17:41:36.3106622Z * [new branch] update-vision-commit-hash/6871122584-778-1 -> origin/update-vision-commit-hash/6871122584-778-1 2025-03-17T17:41:36.3107544Z * [new branch] update-vision-commit-hash/6884505667-779-1 -> origin/update-vision-commit-hash/6884505667-779-1 2025-03-17T17:41:36.3108436Z * [new branch] update-vision-commit-hash/9010274985-1089-1 -> origin/update-vision-commit-hash/9010274985-1089-1 2025-03-17T17:41:36.3109990Z * [new branch] update-xla-commit-hash/10140112669-125-1 -> origin/update-xla-commit-hash/10140112669-125-1 2025-03-17T17:41:36.3110789Z * [new branch] update-xla-commit-hash/6219563710-79-1 -> origin/update-xla-commit-hash/6219563710-79-1 2025-03-17T17:41:36.3111808Z * [new branch] update-xla-commit-hash/6296332542-80-1 -> origin/update-xla-commit-hash/6296332542-80-1 2025-03-17T17:41:36.3112819Z * [new branch] update-xla-commit-hash/6377302016-81-1 -> origin/update-xla-commit-hash/6377302016-81-1 2025-03-17T17:41:36.3113907Z * [new branch] update-xla-commit-hash/6453689944-82-1 -> origin/update-xla-commit-hash/6453689944-82-1 2025-03-17T17:41:36.3114895Z * [new branch] update-xla-commit-hash/6530489691-83-1 -> origin/update-xla-commit-hash/6530489691-83-1 2025-03-17T17:41:36.3116125Z * [new branch] 
update-xla-commit-hash/6610159969-84-1 -> origin/update-xla-commit-hash/6610159969-84-1 2025-03-17T17:41:36.3117518Z * [new branch] update-xla-commit-hash/6689695021-85-1 -> origin/update-xla-commit-hash/6689695021-85-1 2025-03-17T17:41:36.3118793Z * [new branch] update-xla-commit-hash/6767672412-86-1 -> origin/update-xla-commit-hash/6767672412-86-1 2025-03-17T17:41:36.3119860Z * [new branch] update-xla-commit-hash/6846986487-87-1 -> origin/update-xla-commit-hash/6846986487-87-1 2025-03-17T17:41:36.3121164Z * [new branch] update_docs_torch_multinomial_issue#125388 -> origin/update_docs_torch_multinomial_issue#125388 2025-03-17T17:41:36.3121942Z * [new branch] update_kineto_0212_3 -> origin/update_kineto_0212_3 2025-03-17T17:41:36.3123161Z * [new branch] update_kineto_0214 -> origin/update_kineto_0214 2025-03-17T17:41:36.3124277Z * [new branch] update_slow_tests_1722488736 -> origin/update_slow_tests_1722488736 2025-03-17T17:41:36.3125439Z * [new branch] update_slow_tests_1722879173 -> origin/update_slow_tests_1722879173 2025-03-17T17:41:36.3126492Z * [new branch] update_slow_tests_1739173241 -> origin/update_slow_tests_1739173241 2025-03-17T17:41:36.3127556Z * [new branch] update_slow_tests_1739777990 -> origin/update_slow_tests_1739777990 2025-03-17T17:41:36.3128968Z * [new branch] update_slow_tests_1740382789 -> origin/update_slow_tests_1740382789 2025-03-17T17:41:36.3129937Z * [new branch] update_slow_tests_1741592409 -> origin/update_slow_tests_1741592409 2025-03-17T17:41:36.3131062Z * [new branch] update_slow_tests_1742197223 -> origin/update_slow_tests_1742197223 2025-03-17T17:41:36.3132328Z * [new branch] update_submodule_FBGEMM -> origin/update_submodule_FBGEMM 2025-03-17T17:41:36.3133274Z * [new branch] update_submodule_kineto -> origin/update_submodule_kineto 2025-03-17T17:41:36.3134562Z * [new branch] use-better-label-for-dcp -> origin/use-better-label-for-dcp 2025-03-17T17:41:36.3135705Z * [new branch] v0.1.2 -> origin/v0.1.2 2025-03-17T17:41:36.3137222Z * [new branch] v1.0.1 -> origin/v1.0.1 2025-03-17T17:41:36.3141401Z * [new branch] v1.0.3 -> origin/v1.0.3 2025-03-17T17:41:36.3143011Z * [new branch] v1.1.0 -> origin/v1.1.0 2025-03-17T17:41:36.3144261Z * [new branch] v1.2.0 -> origin/v1.2.0 2025-03-17T17:41:36.3145491Z * [new branch] v1.3.0 -> origin/v1.3.0 2025-03-17T17:41:36.3146939Z * [new branch] v1.3.1 -> origin/v1.3.1 2025-03-17T17:41:36.3148077Z * [new branch] validate_fn -> origin/validate_fn 2025-03-17T17:41:36.3149323Z * [new branch] validations_2.6 -> origin/validations_2.6 2025-03-17T17:41:36.3150547Z * [new branch] vfdev-5-patch-2 -> origin/vfdev-5-patch-2 2025-03-17T17:41:36.3151937Z * [new branch] viable/strict -> origin/viable/strict 2025-03-17T17:41:36.3153500Z * [new branch] voz/fsdp_autograd2 -> origin/voz/fsdp_autograd2 2025-03-17T17:41:36.3154410Z * [new branch] voz/fsdp_autograd4 -> origin/voz/fsdp_autograd4 2025-03-17T17:41:36.3155453Z * [new branch] voz/fsdp_autograd_merge -> origin/voz/fsdp_autograd_merge 2025-03-17T17:41:36.3156447Z * [new branch] voz/fsdp_autograd_merge2 -> origin/voz/fsdp_autograd_merge2 2025-03-17T17:41:36.3157293Z * [new branch] voz/serde2 -> origin/voz/serde2 2025-03-17T17:41:36.3158435Z * [new branch] voz/soft_fork_autograd_fsdp -> origin/voz/soft_fork_autograd_fsdp 2025-03-17T17:41:36.3159859Z * [new branch] wdvr/add_boto3 -> origin/wdvr/add_boto3 2025-03-17T17:41:36.3160965Z * [new branch] wdvr/iss145259_alt -> origin/wdvr/iss145259_alt 2025-03-17T17:41:36.3161899Z * [new branch] wdvr/iss_145259 -> origin/wdvr/iss_145259 
2025-03-17T17:41:36.3162917Z * [new branch] wdvr/sccache_nvcc -> origin/wdvr/sccache_nvcc 2025-03-17T17:41:36.3163959Z * [new branch] wdvr/sccache_simplified -> origin/wdvr/sccache_simplified 2025-03-17T17:41:36.3164957Z * [new branch] wdvr/xpu_sccache_fix -> origin/wdvr/xpu_sccache_fix 2025-03-17T17:41:36.3166460Z * [new branch] whc/flight -> origin/whc/flight 2025-03-17T17:41:36.3167797Z * [new branch] whc/flight4 -> origin/whc/flight4 2025-03-17T17:41:36.3168975Z * [new branch] whc/flight51 -> origin/whc/flight51 2025-03-17T17:41:36.3169934Z * [new branch] whc/flight53 -> origin/whc/flight53 2025-03-17T17:41:36.3171487Z * [new branch] whc/flight_full -> origin/whc/flight_full 2025-03-17T17:41:36.3172435Z * [new branch] whc/flightbase -> origin/whc/flightbase 2025-03-17T17:41:36.3173438Z * [new branch] whc/p2phang -> origin/whc/p2phang 2025-03-17T17:41:36.3174644Z * [new branch] whc/stage2 -> origin/whc/stage2 2025-03-17T17:41:36.3176155Z * [new branch] xmfan/ca_5a2be192d1 -> origin/xmfan/ca_5a2be192d1 2025-03-17T17:41:36.3177105Z * [new branch] xmfan/ca_api -> origin/xmfan/ca_api 2025-03-17T17:41:36.3178112Z * [new branch] xmfan/ca_base -> origin/xmfan/ca_base 2025-03-17T17:41:36.3179153Z * [new branch] xmfan/ca_cudagraphs -> origin/xmfan/ca_cudagraphs 2025-03-17T17:41:36.3180069Z * [new branch] xmfan/ca_dynamic -> origin/xmfan/ca_dynamic 2025-03-17T17:41:36.3181079Z * [new branch] xmfan/ca_fix_dyn -> origin/xmfan/ca_fix_dyn 2025-03-17T17:41:36.3182076Z * [new branch] xmfan/ca_fix_lowering -> origin/xmfan/ca_fix_lowering 2025-03-17T17:41:36.3182914Z * [new branch] xmfan/ca_jan3 -> origin/xmfan/ca_jan3 2025-03-17T17:41:36.3183876Z * [new branch] xmfan/ca_jun18 -> origin/xmfan/ca_jun18 2025-03-17T17:41:36.3185346Z * [new branch] xmfan/ca_jun24 -> origin/xmfan/ca_jun24 2025-03-17T17:41:36.3186821Z * [new branch] xmfan/ca_mem_base -> origin/xmfan/ca_mem_base 2025-03-17T17:41:36.3187949Z * [new branch] xmfan/ca_mem_fix -> origin/xmfan/ca_mem_fix 2025-03-17T17:41:36.3188894Z * [new branch] xmfan/ca_memory_fix -> origin/xmfan/ca_memory_fix 2025-03-17T17:41:36.3190013Z * [new branch] xmfan/ca_memory_fix_rebased -> origin/xmfan/ca_memory_fix_rebased 2025-03-17T17:41:36.3191085Z * [new branch] xmfan/ca_memory_fix_rebased2 -> origin/xmfan/ca_memory_fix_rebased2 2025-03-17T17:41:36.3192040Z * [new branch] xmfan/ca_move_to_cuda -> origin/xmfan/ca_move_to_cuda 2025-03-17T17:41:36.3193232Z * [new branch] xmfan/ca_overhead -> origin/xmfan/ca_overhead 2025-03-17T17:41:36.3194385Z * [new branch] xmfan/ca_overhead_0eba7e5451 -> origin/xmfan/ca_overhead_0eba7e5451 2025-03-17T17:41:36.3195272Z * [new branch] xmfan/ca_scalar -> origin/xmfan/ca_scalar 2025-03-17T17:41:36.3196392Z * [new branch] xmfan/ca_subclass_mem_fix -> origin/xmfan/ca_subclass_mem_fix 2025-03-17T17:41:36.3197346Z * [new branch] xmfan/ca_warm_mem -> origin/xmfan/ca_warm_mem 2025-03-17T17:41:36.3198398Z * [new branch] xmfan/ca_warm_mem_base -> origin/xmfan/ca_warm_mem_base 2025-03-17T17:41:36.3199357Z * [new branch] xmfan/cacu_jun18 -> origin/xmfan/cacu_jun18 2025-03-17T17:41:36.3200383Z * [new branch] xmfan/cacu_jun19 -> origin/xmfan/cacu_jun19 2025-03-17T17:41:36.3201367Z * [new branch] xmfan/cacu_jun4 -> origin/xmfan/cacu_jun4 2025-03-17T17:41:36.3202647Z * [new branch] xmfan/cacu_may27 -> origin/xmfan/cacu_may27 2025-03-17T17:41:36.3204179Z * [new branch] xmfan/circular_dep -> origin/xmfan/circular_dep 2025-03-17T17:41:36.3205310Z * [new branch] xmfan/compiled_autograd_bench -> origin/xmfan/compiled_autograd_bench 
2025-03-17T17:41:36.3206469Z * [new branch] xmfan/compiled_autograd_bench_base -> origin/xmfan/compiled_autograd_bench_base 2025-03-17T17:41:36.3207531Z * [new branch] xmfan/compiled_autograd_benchmark -> origin/xmfan/compiled_autograd_benchmark 2025-03-17T17:41:36.3208422Z * [new branch] xmfan/compiled_autograd_ddp -> origin/xmfan/compiled_autograd_ddp 2025-03-17T17:41:36.3209569Z * [new branch] xmfan/compiled_autograd_feb_29 -> origin/xmfan/compiled_autograd_feb_29 2025-03-17T17:41:36.3210735Z * [new branch] xmfan/compiled_autograd_graph_breaks -> origin/xmfan/compiled_autograd_graph_breaks 2025-03-17T17:41:36.3211633Z * [new branch] xmfan/compiled_autograd_hud -> origin/xmfan/compiled_autograd_hud 2025-03-17T17:41:36.3213445Z * [new branch] xmfan/compiled_autograd_hypothetical_perf -> origin/xmfan/compiled_autograd_hypothetical_perf 2025-03-17T17:41:36.3214338Z * [new branch] xmfan/compiled_autograd_perf_no_reuse -> origin/xmfan/compiled_autograd_perf_no_reuse 2025-03-17T17:41:36.3215207Z * [new branch] xmfan/disable_duck_shape -> origin/xmfan/disable_duck_shape 2025-03-17T17:41:36.3216277Z * [new branch] xmfan/distributed_torchbench -> origin/xmfan/distributed_torchbench 2025-03-17T17:41:36.3217340Z * [new branch] xmfan/fca_cpp_node_passthrough -> origin/xmfan/fca_cpp_node_passthrough 2025-03-17T17:41:36.3218344Z * [new branch] xmfan/feb_10_compiled_autograd -> origin/xmfan/feb_10_compiled_autograd 2025-03-17T17:41:36.3219582Z * [new branch] xmfan/feb_10_compiled_autograd_cudagraph -> origin/xmfan/feb_10_compiled_autograd_cudagraph 2025-03-17T17:41:36.3220397Z * [new branch] xmfan/fsdp_wraps -> origin/xmfan/fsdp_wraps 2025-03-17T17:41:36.3221414Z * [new branch] xmfan/issue_123374 -> origin/xmfan/issue_123374 2025-03-17T17:41:36.3222572Z * [new branch] xmfan/oss_benchmark_script -> origin/xmfan/oss_benchmark_script 2025-03-17T17:41:36.3223660Z * [new branch] xmfan/rename_nanogpt -> origin/xmfan/rename_nanogpt 2025-03-17T17:41:36.3224948Z * [new branch] xmfan/retains_grad_hooks -> origin/xmfan/retains_grad_hooks 2025-03-17T17:41:36.3225827Z * [new branch] xmfan/segfault_test -> origin/xmfan/segfault_test 2025-03-17T17:41:36.3226936Z * [new branch] xmfan/single_step -> origin/xmfan/single_step 2025-03-17T17:41:36.3227978Z * [new branch] xmfan/sth_0829 -> origin/xmfan/sth_0829 2025-03-17T17:41:36.3229170Z * [new branch] xmfan/test -> origin/xmfan/test 2025-03-17T17:41:36.3230188Z * [new branch] xmfan/yolov3_oom -> origin/xmfan/yolov3_oom 2025-03-17T17:41:36.3231817Z * [new branch] yguo/debug-0226-constexpr -> origin/yguo/debug-0226-constexpr 2025-03-17T17:41:36.3232886Z * [new branch] yguo/fix-remaining-cpp-wrapper -> origin/yguo/fix-remaining-cpp-wrapper 2025-03-17T17:41:36.3233705Z * [new branch] yguo/new_latest_changes -> origin/yguo/new_latest_changes 2025-03-17T17:41:36.3234770Z * [new branch] yguo/patch_constexpr_changes -> origin/yguo/patch_constexpr_changes 2025-03-17T17:41:36.3236400Z * [new branch] yguo/repro-segfault-triton-aoti-cpp-wrapper -> origin/yguo/repro-segfault-triton-aoti-cpp-wrapper 2025-03-17T17:41:36.3237429Z * [new branch] yihan_quantization -> origin/yihan_quantization 2025-03-17T17:41:36.3239176Z * [new branch] yiming/bootcamp -> origin/yiming/bootcamp 2025-03-17T17:41:36.3240621Z * [new branch] zainr/canary-test -> origin/zainr/canary-test 2025-03-17T17:41:36.3241779Z * [new branch] zainr/historical-correlation-fix -> origin/zainr/historical-correlation-fix 2025-03-17T17:41:36.3242529Z * [new branch] zainr/lint-fix -> origin/zainr/lint-fix 
2025-03-17T17:41:36.3244057Z * [new branch] zainr/make-unstable -> origin/zainr/make-unstable 2025-03-17T17:41:36.3245002Z * [new branch] zainr/metrics-job-id -> origin/zainr/metrics-job-id 2025-03-17T17:41:36.3245998Z * [new branch] zainr/metrics-pr -> origin/zainr/metrics-pr 2025-03-17T17:41:36.3247020Z * [new branch] zainr/mypy-break-test -> origin/zainr/mypy-break-test 2025-03-17T17:41:36.3248377Z * [new branch] zainr/mypy-break-test2 -> origin/zainr/mypy-break-test2 2025-03-17T17:41:36.3249826Z * [new branch] zainr/mypy-break-test3 -> origin/zainr/mypy-break-test3 2025-03-17T17:41:36.3250832Z * [new branch] zainr/mypy-update -> origin/zainr/mypy-update 2025-03-17T17:41:36.3251944Z * [new branch] zainr/pull-migration-c -> origin/zainr/pull-migration-c 2025-03-17T17:41:36.3253223Z * [new branch] zainr/revert-60576419a2a-make-dynamic -> origin/zainr/revert-60576419a2a-make-dynamic 2025-03-17T17:41:36.3254183Z * [new branch] zainr/sha-checking -> origin/zainr/sha-checking 2025-03-17T17:41:36.3255249Z * [new branch] zainr/td-baseline-stats -> origin/zainr/td-baseline-stats 2025-03-17T17:41:36.3256239Z * [new branch] zainr/td-class -> origin/zainr/td-class 2025-03-17T17:41:36.3258056Z * [new branch] zainr/td-class-metrics -> origin/zainr/td-class-metrics 2025-03-17T17:41:36.3259057Z * [new branch] zainr/td-downgrade -> origin/zainr/td-downgrade 2025-03-17T17:41:36.3260114Z * [new branch] zainr/td-file-pass -> origin/zainr/td-file-pass 2025-03-17T17:41:36.3261213Z * [new branch] zainr/td-metrics-v2 -> origin/zainr/td-metrics-v2 2025-03-17T17:41:36.3262259Z * [new branch] zainr/td-pass-class-times -> origin/zainr/td-pass-class-times 2025-03-17T17:41:36.3263448Z * [new branch] zainr/td-shard-info -> origin/zainr/td-shard-info 2025-03-17T17:41:36.3264236Z * [new branch] zainr/td-trial -> origin/zainr/td-trial 2025-03-17T17:41:36.3265264Z * [new branch] zainr/unstable -> origin/zainr/unstable 2025-03-17T17:41:36.3267043Z * [new branch] zainrizvi/testing1 -> origin/zainrizvi/testing1 2025-03-17T17:41:36.3268284Z * [new branch] zasdfgbnm-patch-3 -> origin/zasdfgbnm-patch-3 2025-03-17T17:41:36.3269390Z * [new branch] zb2p -> origin/zb2p 2025-03-17T17:41:36.3270730Z * [new branch] zdevito-patch-1 -> origin/zdevito-patch-1 2025-03-17T17:41:36.3272074Z * [new branch] zdevito-patch-2 -> origin/zdevito-patch-2 2025-03-17T17:41:36.3273325Z * [new branch] zeros-and-scatter-part2 -> origin/zeros-and-scatter-part2 2025-03-17T17:41:36.3274939Z * [new branch] zhxchen17/scratch/0 -> origin/zhxchen17/scratch/0 2025-03-17T17:41:36.3276448Z * [new branch] zhxchen17/sticky_cache/0 -> origin/zhxchen17/sticky_cache/0 2025-03-17T17:41:36.3277926Z * [new branch] zxiiro/editor-config -> origin/zxiiro/editor-config 2025-03-17T17:41:36.3279255Z * [new tag] bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug -> bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug 2025-03-17T17:41:36.3279794Z * [new tag] ci/binaries/77164 -> ci/binaries/77164 2025-03-17T17:41:36.3281414Z * [new tag] ciflow/all/70978 -> ciflow/all/70978 2025-03-17T17:41:36.3282187Z * [new tag] ciflow/all/70979 -> ciflow/all/70979 2025-03-17T17:41:36.3283117Z * [new tag] ciflow/all/70989 -> ciflow/all/70989 2025-03-17T17:41:36.3284064Z * [new tag] ciflow/binaries/120076 -> ciflow/binaries/120076 2025-03-17T17:41:36.3284917Z * [new tag] ciflow/binaries/138996 -> ciflow/binaries/138996 2025-03-17T17:41:36.3285754Z * [new tag] ciflow/binaries/143416 -> ciflow/binaries/143416 2025-03-17T17:41:36.3286587Z * [new tag] ciflow/binaries/144127 -> 
ciflow/binaries/144127 2025-03-17T17:41:36.3287365Z * [new tag] ciflow/binaries/145119 -> ciflow/binaries/145119 2025-03-17T17:41:36.3288239Z * [new tag] ciflow/binaries/145224 -> ciflow/binaries/145224 2025-03-17T17:41:36.3289086Z * [new tag] ciflow/binaries/146717 -> ciflow/binaries/146717 2025-03-17T17:41:36.3289939Z * [new tag] ciflow/binaries/147498 -> ciflow/binaries/147498 2025-03-17T17:41:36.3290750Z * [new tag] ciflow/binaries/147664 -> ciflow/binaries/147664 2025-03-17T17:41:36.3291484Z * [new tag] ciflow/binaries/147917 -> ciflow/binaries/147917 2025-03-17T17:41:36.3292200Z * [new tag] ciflow/binaries/148163 -> ciflow/binaries/148163 2025-03-17T17:41:36.3293223Z * [new tag] ciflow/binaries/148173 -> ciflow/binaries/148173 2025-03-17T17:41:36.3294096Z * [new tag] ciflow/binaries/149192 -> ciflow/binaries/149192 2025-03-17T17:41:36.3295115Z * [new tag] ciflow/binaries/149254 -> ciflow/binaries/149254 2025-03-17T17:41:36.3295940Z * [new tag] ciflow/binaries/149305 -> ciflow/binaries/149305 2025-03-17T17:41:36.3296906Z * [new tag] ciflow/binaries_wheel/138834 -> ciflow/binaries_wheel/138834 2025-03-17T17:41:36.3297492Z * [new tag] ciflow/binaries_wheel/143388 -> ciflow/binaries_wheel/143388 2025-03-17T17:41:36.3298279Z * [new tag] ciflow/binaries_wheel/144049 -> ciflow/binaries_wheel/144049 2025-03-17T17:41:36.3298914Z * [new tag] ciflow/binaries_wheel/146055 -> ciflow/binaries_wheel/146055 2025-03-17T17:41:36.3299772Z * [new tag] ciflow/binaries_wheel/146573 -> ciflow/binaries_wheel/146573 2025-03-17T17:41:36.3300632Z * [new tag] ciflow/binaries_wheel/147074 -> ciflow/binaries_wheel/147074 2025-03-17T17:41:36.3302200Z * [new tag] ciflow/binaries_wheel/147455 -> ciflow/binaries_wheel/147455 2025-03-17T17:41:36.3303525Z * [new tag] ciflow/binaries_wheel/148320 -> ciflow/binaries_wheel/148320 2025-03-17T17:41:36.3303758Z * [new tag] ciflow/binaries_wheel/149192 -> ciflow/binaries_wheel/149192 2025-03-17T17:41:36.3304409Z * [new tag] ciflow/cuda/70978 -> ciflow/cuda/70978 2025-03-17T17:41:36.3305145Z * [new tag] ciflow/cuda/70979 -> ciflow/cuda/70979 2025-03-17T17:41:36.3305511Z * [new tag] ciflow/cuda/70989 -> ciflow/cuda/70989 2025-03-17T17:41:36.3306567Z * [new tag] ciflow/inductor-micro-benchmark/141910 -> ciflow/inductor-micro-benchmark/141910 2025-03-17T17:41:36.3307573Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/148672 -> ciflow/inductor-perf-test-nightly-rocm/148672 2025-03-17T17:41:36.3308331Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/149039 -> ciflow/inductor-perf-test-nightly-rocm/149039 2025-03-17T17:41:36.3308938Z * [new tag] ciflow/inductor-periodic/145612 -> ciflow/inductor-periodic/145612 2025-03-17T17:41:36.3309867Z * [new tag] ciflow/inductor-periodic/147315 -> ciflow/inductor-periodic/147315 2025-03-17T17:41:36.3310775Z * [new tag] ciflow/inductor-rocm/140989 -> ciflow/inductor-rocm/140989 2025-03-17T17:41:36.3311550Z * [new tag] ciflow/inductor-rocm/141309 -> ciflow/inductor-rocm/141309 2025-03-17T17:41:36.3312267Z * [new tag] ciflow/inductor-rocm/146264 -> ciflow/inductor-rocm/146264 2025-03-17T17:41:36.3313134Z * [new tag] ciflow/inductor-rocm/146903 -> ciflow/inductor-rocm/146903 2025-03-17T17:41:36.3313675Z * [new tag] ciflow/inductor-rocm/147315 -> ciflow/inductor-rocm/147315 2025-03-17T17:41:36.3314582Z * [new tag] ciflow/inductor-rocm/147452 -> ciflow/inductor-rocm/147452 2025-03-17T17:41:36.3315447Z * [new tag] ciflow/inductor-rocm/147583 -> ciflow/inductor-rocm/147583 2025-03-17T17:41:36.3316144Z * [new tag] ciflow/inductor-rocm/148327 -> 
ciflow/inductor-rocm/148327 2025-03-17T17:41:36.3316867Z * [new tag] ciflow/inductor-rocm/149041 -> ciflow/inductor-rocm/149041 2025-03-17T17:41:36.3318172Z * [new tag] ciflow/inductor/110155 -> ciflow/inductor/110155 2025-03-17T17:41:36.3318835Z * [new tag] ciflow/inductor/113257 -> ciflow/inductor/113257 2025-03-17T17:41:36.3319557Z * [new tag] ciflow/inductor/119496 -> ciflow/inductor/119496 2025-03-17T17:41:36.3320130Z * [new tag] ciflow/inductor/119977 -> ciflow/inductor/119977 2025-03-17T17:41:36.3320782Z * [new tag] ciflow/inductor/120076 -> ciflow/inductor/120076 2025-03-17T17:41:36.3321387Z * [new tag] ciflow/inductor/121445 -> ciflow/inductor/121445 2025-03-17T17:41:36.3322020Z * [new tag] ciflow/inductor/124490 -> ciflow/inductor/124490 2025-03-17T17:41:36.3322605Z * [new tag] ciflow/inductor/125270 -> ciflow/inductor/125270 2025-03-17T17:41:36.3323229Z * [new tag] ciflow/inductor/125326 -> ciflow/inductor/125326 2025-03-17T17:41:36.3323830Z * [new tag] ciflow/inductor/125428 -> ciflow/inductor/125428 2025-03-17T17:41:36.3324531Z * [new tag] ciflow/inductor/125806 -> ciflow/inductor/125806 2025-03-17T17:41:36.3325339Z * [new tag] ciflow/inductor/125888 -> ciflow/inductor/125888 2025-03-17T17:41:36.3326383Z * [new tag] ciflow/inductor/125995 -> ciflow/inductor/125995 2025-03-17T17:41:36.3327279Z * [new tag] ciflow/inductor/126348 -> ciflow/inductor/126348 2025-03-17T17:41:36.3328013Z * [new tag] ciflow/inductor/127171 -> ciflow/inductor/127171 2025-03-17T17:41:36.3328578Z * [new tag] ciflow/inductor/127293 -> ciflow/inductor/127293 2025-03-17T17:41:36.3329188Z * [new tag] ciflow/inductor/127294 -> ciflow/inductor/127294 2025-03-17T17:41:36.3329916Z * [new tag] ciflow/inductor/129352 -> ciflow/inductor/129352 2025-03-17T17:41:36.3330639Z * [new tag] ciflow/inductor/129420 -> ciflow/inductor/129420 2025-03-17T17:41:36.3331225Z * [new tag] ciflow/inductor/130141 -> ciflow/inductor/130141 2025-03-17T17:41:36.3332124Z * [new tag] ciflow/inductor/130499 -> ciflow/inductor/130499 2025-03-17T17:41:36.3332778Z * [new tag] ciflow/inductor/130887 -> ciflow/inductor/130887 2025-03-17T17:41:36.3333452Z * [new tag] ciflow/inductor/131354 -> ciflow/inductor/131354 2025-03-17T17:41:36.3334027Z * [new tag] ciflow/inductor/132021 -> ciflow/inductor/132021 2025-03-17T17:41:36.3334682Z * [new tag] ciflow/inductor/132414 -> ciflow/inductor/132414 2025-03-17T17:41:36.3335361Z * [new tag] ciflow/inductor/133044 -> ciflow/inductor/133044 2025-03-17T17:41:36.3336002Z * [new tag] ciflow/inductor/133121 -> ciflow/inductor/133121 2025-03-17T17:41:36.3336575Z * [new tag] ciflow/inductor/133287 -> ciflow/inductor/133287 2025-03-17T17:41:36.3337424Z * [new tag] ciflow/inductor/133289 -> ciflow/inductor/133289 2025-03-17T17:41:36.3338123Z * [new tag] ciflow/inductor/133296 -> ciflow/inductor/133296 2025-03-17T17:41:36.3338780Z * [new tag] ciflow/inductor/133297 -> ciflow/inductor/133297 2025-03-17T17:41:36.3339507Z * [new tag] ciflow/inductor/133315 -> ciflow/inductor/133315 2025-03-17T17:41:36.3340123Z * [new tag] ciflow/inductor/133392 -> ciflow/inductor/133392 2025-03-17T17:41:36.3340769Z * [new tag] ciflow/inductor/133419 -> ciflow/inductor/133419 2025-03-17T17:41:36.3341403Z * [new tag] ciflow/inductor/133423 -> ciflow/inductor/133423 2025-03-17T17:41:36.3342057Z * [new tag] ciflow/inductor/133667 -> ciflow/inductor/133667 2025-03-17T17:41:36.3342672Z * [new tag] ciflow/inductor/133753 -> ciflow/inductor/133753 2025-03-17T17:41:36.3343708Z * [new tag] ciflow/inductor/134592 -> ciflow/inductor/134592 
2025-03-17T17:41:36.3344249Z * [new tag] ciflow/inductor/134681 -> ciflow/inductor/134681 2025-03-17T17:41:36.3344870Z * [new tag] ciflow/inductor/135708 -> ciflow/inductor/135708 2025-03-17T17:41:36.3345509Z * [new tag] ciflow/inductor/135792 -> ciflow/inductor/135792 2025-03-17T17:41:36.3346175Z * [new tag] ciflow/inductor/136355 -> ciflow/inductor/136355 2025-03-17T17:41:36.3346956Z * [new tag] ciflow/inductor/136702 -> ciflow/inductor/136702 2025-03-17T17:41:36.3347597Z * [new tag] ciflow/inductor/137400 -> ciflow/inductor/137400 2025-03-17T17:41:36.3348230Z * [new tag] ciflow/inductor/137568 -> ciflow/inductor/137568 2025-03-17T17:41:36.3348951Z * [new tag] ciflow/inductor/137583 -> ciflow/inductor/137583 2025-03-17T17:41:36.3349914Z * [new tag] ciflow/inductor/137846 -> ciflow/inductor/137846 2025-03-17T17:41:36.3353254Z * [new tag] ciflow/inductor/137884 -> ciflow/inductor/137884 2025-03-17T17:41:36.3353651Z * [new tag] ciflow/inductor/138185 -> ciflow/inductor/138185 2025-03-17T17:41:36.3353974Z * [new tag] ciflow/inductor/138202 -> ciflow/inductor/138202 2025-03-17T17:41:36.3354559Z * [new tag] ciflow/inductor/138214 -> ciflow/inductor/138214 2025-03-17T17:41:36.3354965Z * [new tag] ciflow/inductor/138388 -> ciflow/inductor/138388 2025-03-17T17:41:36.3355350Z * [new tag] ciflow/inductor/138513 -> ciflow/inductor/138513 2025-03-17T17:41:36.3355591Z * [new tag] ciflow/inductor/138519 -> ciflow/inductor/138519 2025-03-17T17:41:36.3356117Z * [new tag] ciflow/inductor/138555 -> ciflow/inductor/138555 2025-03-17T17:41:36.3356555Z * [new tag] ciflow/inductor/138626 -> ciflow/inductor/138626 2025-03-17T17:41:36.3357190Z * [new tag] ciflow/inductor/139094 -> ciflow/inductor/139094 2025-03-17T17:41:36.3357851Z * [new tag] ciflow/inductor/139271 -> ciflow/inductor/139271 2025-03-17T17:41:36.3358487Z * [new tag] ciflow/inductor/139561 -> ciflow/inductor/139561 2025-03-17T17:41:36.3359197Z * [new tag] ciflow/inductor/139975 -> ciflow/inductor/139975 2025-03-17T17:41:36.3359800Z * [new tag] ciflow/inductor/140032 -> ciflow/inductor/140032 2025-03-17T17:41:36.3360679Z * [new tag] ciflow/inductor/140159 -> ciflow/inductor/140159 2025-03-17T17:41:36.3361239Z * [new tag] ciflow/inductor/140756 -> ciflow/inductor/140756 2025-03-17T17:41:36.3362513Z * [new tag] ciflow/inductor/140979 -> ciflow/inductor/140979 2025-03-17T17:41:36.3363066Z * [new tag] ciflow/inductor/141096 -> ciflow/inductor/141096 2025-03-17T17:41:36.3363709Z * [new tag] ciflow/inductor/141097 -> ciflow/inductor/141097 2025-03-17T17:41:36.3364385Z * [new tag] ciflow/inductor/141309 -> ciflow/inductor/141309 2025-03-17T17:41:36.3365031Z * [new tag] ciflow/inductor/141641 -> ciflow/inductor/141641 2025-03-17T17:41:36.3365724Z * [new tag] ciflow/inductor/141684 -> ciflow/inductor/141684 2025-03-17T17:41:36.3366364Z * [new tag] ciflow/inductor/141700 -> ciflow/inductor/141700 2025-03-17T17:41:36.3367233Z * [new tag] ciflow/inductor/141730 -> ciflow/inductor/141730 2025-03-17T17:41:36.3367823Z * [new tag] ciflow/inductor/141842 -> ciflow/inductor/141842 2025-03-17T17:41:36.3368427Z * [new tag] ciflow/inductor/141940 -> ciflow/inductor/141940 2025-03-17T17:41:36.3369119Z * [new tag] ciflow/inductor/141944 -> ciflow/inductor/141944 2025-03-17T17:41:36.3369759Z * [new tag] ciflow/inductor/141961 -> ciflow/inductor/141961 2025-03-17T17:41:36.3370776Z * [new tag] ciflow/inductor/142272 -> ciflow/inductor/142272 2025-03-17T17:41:36.3371342Z * [new tag] ciflow/inductor/142295 -> ciflow/inductor/142295 2025-03-17T17:41:36.3372205Z * [new tag] 
ciflow/inductor/142309 -> ciflow/inductor/142309 2025-03-17T17:41:36.3372766Z * [new tag] ciflow/inductor/142372 -> ciflow/inductor/142372 2025-03-17T17:41:36.3373454Z * [new tag] ciflow/inductor/142851 -> ciflow/inductor/142851 2025-03-17T17:41:36.3374333Z * [new tag] ciflow/inductor/143256 -> ciflow/inductor/143256 2025-03-17T17:41:36.3374960Z * [new tag] ciflow/inductor/143313 -> ciflow/inductor/143313 2025-03-17T17:41:36.3375581Z * [new tag] ciflow/inductor/143411 -> ciflow/inductor/143411 2025-03-17T17:41:36.3376215Z * [new tag] ciflow/inductor/143457 -> ciflow/inductor/143457 2025-03-17T17:41:36.3377336Z * [new tag] ciflow/inductor/143464 -> ciflow/inductor/143464 2025-03-17T17:41:36.3378033Z * [new tag] ciflow/inductor/143475 -> ciflow/inductor/143475 2025-03-17T17:41:36.3378762Z * [new tag] ciflow/inductor/143525 -> ciflow/inductor/143525 2025-03-17T17:41:36.3379539Z * [new tag] ciflow/inductor/143527 -> ciflow/inductor/143527 2025-03-17T17:41:36.3380209Z * [new tag] ciflow/inductor/143533 -> ciflow/inductor/143533 2025-03-17T17:41:36.3380852Z * [new tag] ciflow/inductor/143534 -> ciflow/inductor/143534 2025-03-17T17:41:36.3381704Z * [new tag] ciflow/inductor/143544 -> ciflow/inductor/143544 2025-03-17T17:41:36.3382443Z * [new tag] ciflow/inductor/143666 -> ciflow/inductor/143666 2025-03-17T17:41:36.3383091Z * [new tag] ciflow/inductor/143671 -> ciflow/inductor/143671 2025-03-17T17:41:36.3383857Z * [new tag] ciflow/inductor/143712 -> ciflow/inductor/143712 2025-03-17T17:41:36.3384454Z * [new tag] ciflow/inductor/143812 -> ciflow/inductor/143812 2025-03-17T17:41:36.3385204Z * [new tag] ciflow/inductor/143833 -> ciflow/inductor/143833 2025-03-17T17:41:36.3385912Z * [new tag] ciflow/inductor/143961 -> ciflow/inductor/143961 2025-03-17T17:41:36.3386947Z * [new tag] ciflow/inductor/143987 -> ciflow/inductor/143987 2025-03-17T17:41:36.3387545Z * [new tag] ciflow/inductor/144008 -> ciflow/inductor/144008 2025-03-17T17:41:36.3388420Z * [new tag] ciflow/inductor/144017 -> ciflow/inductor/144017 2025-03-17T17:41:36.3389028Z * [new tag] ciflow/inductor/144073 -> ciflow/inductor/144073 2025-03-17T17:41:36.3389968Z * [new tag] ciflow/inductor/144120 -> ciflow/inductor/144120 2025-03-17T17:41:36.3390874Z * [new tag] ciflow/inductor/144234 -> ciflow/inductor/144234 2025-03-17T17:41:36.3391659Z * [new tag] ciflow/inductor/144272 -> ciflow/inductor/144272 2025-03-17T17:41:36.3392351Z * [new tag] ciflow/inductor/144288 -> ciflow/inductor/144288 2025-03-17T17:41:36.3392995Z * [new tag] ciflow/inductor/144293 -> ciflow/inductor/144293 2025-03-17T17:41:36.3394141Z * [new tag] ciflow/inductor/144294 -> ciflow/inductor/144294 2025-03-17T17:41:36.3394734Z * [new tag] ciflow/inductor/144332 -> ciflow/inductor/144332 2025-03-17T17:41:36.3395420Z * [new tag] ciflow/inductor/144333 -> ciflow/inductor/144333 2025-03-17T17:41:36.3396047Z * [new tag] ciflow/inductor/144353 -> ciflow/inductor/144353 2025-03-17T17:41:36.3396709Z * [new tag] ciflow/inductor/144365 -> ciflow/inductor/144365 2025-03-17T17:41:36.3397349Z * [new tag] ciflow/inductor/144366 -> ciflow/inductor/144366 2025-03-17T17:41:36.3398019Z * [new tag] ciflow/inductor/144405 -> ciflow/inductor/144405 2025-03-17T17:41:36.3398666Z * [new tag] ciflow/inductor/144438 -> ciflow/inductor/144438 2025-03-17T17:41:36.3399352Z * [new tag] ciflow/inductor/144452 -> ciflow/inductor/144452 2025-03-17T17:41:36.3400000Z * [new tag] ciflow/inductor/144458 -> ciflow/inductor/144458 2025-03-17T17:41:36.3400749Z * [new tag] ciflow/inductor/144501 -> 
ciflow/inductor/144501 2025-03-17T17:41:36.3401484Z * [new tag] ciflow/inductor/144505 -> ciflow/inductor/144505 2025-03-17T17:41:36.3402125Z * [new tag] ciflow/inductor/144507 -> ciflow/inductor/144507 2025-03-17T17:41:36.3402790Z * [new tag] ciflow/inductor/144516 -> ciflow/inductor/144516 2025-03-17T17:41:36.3403425Z * [new tag] ciflow/inductor/144542 -> ciflow/inductor/144542 2025-03-17T17:41:36.3404118Z * [new tag] ciflow/inductor/144548 -> ciflow/inductor/144548 2025-03-17T17:41:36.3404726Z * [new tag] ciflow/inductor/144551 -> ciflow/inductor/144551 2025-03-17T17:41:36.3405468Z * [new tag] ciflow/inductor/144553 -> ciflow/inductor/144553 2025-03-17T17:41:36.3406054Z * [new tag] ciflow/inductor/144555 -> ciflow/inductor/144555 2025-03-17T17:41:36.3406722Z * [new tag] ciflow/inductor/144556 -> ciflow/inductor/144556 2025-03-17T17:41:36.3407381Z * [new tag] ciflow/inductor/144579 -> ciflow/inductor/144579 2025-03-17T17:41:36.3408608Z * [new tag] ciflow/inductor/144598 -> ciflow/inductor/144598 2025-03-17T17:41:36.3409229Z * [new tag] ciflow/inductor/144712 -> ciflow/inductor/144712 2025-03-17T17:41:36.3409851Z * [new tag] ciflow/inductor/144721 -> ciflow/inductor/144721 2025-03-17T17:41:36.3410770Z * [new tag] ciflow/inductor/144724 -> ciflow/inductor/144724 2025-03-17T17:41:36.3411360Z * [new tag] ciflow/inductor/144765 -> ciflow/inductor/144765 2025-03-17T17:41:36.3412167Z * [new tag] ciflow/inductor/144771 -> ciflow/inductor/144771 2025-03-17T17:41:36.3413068Z * [new tag] ciflow/inductor/144880 -> ciflow/inductor/144880 2025-03-17T17:41:36.3413670Z * [new tag] ciflow/inductor/144905 -> ciflow/inductor/144905 2025-03-17T17:41:36.3414299Z * [new tag] ciflow/inductor/144925 -> ciflow/inductor/144925 2025-03-17T17:41:36.3414975Z * [new tag] ciflow/inductor/144943 -> ciflow/inductor/144943 2025-03-17T17:41:36.3415739Z * [new tag] ciflow/inductor/144953 -> ciflow/inductor/144953 2025-03-17T17:41:36.3416393Z * [new tag] ciflow/inductor/144975 -> ciflow/inductor/144975 2025-03-17T17:41:36.3417076Z * [new tag] ciflow/inductor/144979 -> ciflow/inductor/144979 2025-03-17T17:41:36.3417719Z * [new tag] ciflow/inductor/144986 -> ciflow/inductor/144986 2025-03-17T17:41:36.3418511Z * [new tag] ciflow/inductor/144992 -> ciflow/inductor/144992 2025-03-17T17:41:36.3419144Z * [new tag] ciflow/inductor/145024 -> ciflow/inductor/145024 2025-03-17T17:41:36.3419813Z * [new tag] ciflow/inductor/145061 -> ciflow/inductor/145061 2025-03-17T17:41:36.3420440Z * [new tag] ciflow/inductor/145117 -> ciflow/inductor/145117 2025-03-17T17:41:36.3421104Z * [new tag] ciflow/inductor/145119 -> ciflow/inductor/145119 2025-03-17T17:41:36.3421881Z * [new tag] ciflow/inductor/145130 -> ciflow/inductor/145130 2025-03-17T17:41:36.3422537Z * [new tag] ciflow/inductor/145150 -> ciflow/inductor/145150 2025-03-17T17:41:36.3423483Z * [new tag] ciflow/inductor/145153 -> ciflow/inductor/145153 2025-03-17T17:41:36.3424078Z * [new tag] ciflow/inductor/145254 -> ciflow/inductor/145254 2025-03-17T17:41:36.3424748Z * [new tag] ciflow/inductor/145331 -> ciflow/inductor/145331 2025-03-17T17:41:36.3425410Z * [new tag] ciflow/inductor/145353 -> ciflow/inductor/145353 2025-03-17T17:41:36.3426082Z * [new tag] ciflow/inductor/145475 -> ciflow/inductor/145475 2025-03-17T17:41:36.3426774Z * [new tag] ciflow/inductor/145523 -> ciflow/inductor/145523 2025-03-17T17:41:36.3427461Z * [new tag] ciflow/inductor/145540 -> ciflow/inductor/145540 2025-03-17T17:41:36.3428217Z * [new tag] ciflow/inductor/145559 -> ciflow/inductor/145559 
2025-03-17T17:41:36.3428850Z * [new tag] ciflow/inductor/145562 -> ciflow/inductor/145562 2025-03-17T17:41:36.3429514Z * [new tag] ciflow/inductor/145594 -> ciflow/inductor/145594 2025-03-17T17:41:36.3446572Z * [new tag] ciflow/inductor/145595 -> ciflow/inductor/145595 2025-03-17T17:41:36.3447014Z * [new tag] ciflow/inductor/145605 -> ciflow/inductor/145605 2025-03-17T17:41:36.3447232Z * [new tag] ciflow/inductor/145612 -> ciflow/inductor/145612 2025-03-17T17:41:36.3447446Z * [new tag] ciflow/inductor/145636 -> ciflow/inductor/145636 2025-03-17T17:41:36.3447644Z * [new tag] ciflow/inductor/145647 -> ciflow/inductor/145647 2025-03-17T17:41:36.3447946Z * [new tag] ciflow/inductor/145681 -> ciflow/inductor/145681 2025-03-17T17:41:36.3448147Z * [new tag] ciflow/inductor/145847 -> ciflow/inductor/145847 2025-03-17T17:41:36.3448379Z * [new tag] ciflow/inductor/145865 -> ciflow/inductor/145865 2025-03-17T17:41:36.3448585Z * [new tag] ciflow/inductor/145885 -> ciflow/inductor/145885 2025-03-17T17:41:36.3448795Z * [new tag] ciflow/inductor/145911 -> ciflow/inductor/145911 2025-03-17T17:41:36.3448994Z * [new tag] ciflow/inductor/145922 -> ciflow/inductor/145922 2025-03-17T17:41:36.3449259Z * [new tag] ciflow/inductor/145936 -> ciflow/inductor/145936 2025-03-17T17:41:36.3449457Z * [new tag] ciflow/inductor/145969 -> ciflow/inductor/145969 2025-03-17T17:41:36.3449666Z * [new tag] ciflow/inductor/145979 -> ciflow/inductor/145979 2025-03-17T17:41:36.3449913Z * [new tag] ciflow/inductor/145992 -> ciflow/inductor/145992 2025-03-17T17:41:36.3450125Z * [new tag] ciflow/inductor/146051 -> ciflow/inductor/146051 2025-03-17T17:41:36.3450319Z * [new tag] ciflow/inductor/146063 -> ciflow/inductor/146063 2025-03-17T17:41:36.3450584Z * [new tag] ciflow/inductor/146101 -> ciflow/inductor/146101 2025-03-17T17:41:36.3450781Z * [new tag] ciflow/inductor/146115 -> ciflow/inductor/146115 2025-03-17T17:41:36.3450986Z * [new tag] ciflow/inductor/146135 -> ciflow/inductor/146135 2025-03-17T17:41:36.3451309Z * [new tag] ciflow/inductor/146171 -> ciflow/inductor/146171 2025-03-17T17:41:36.3451518Z * [new tag] ciflow/inductor/146172 -> ciflow/inductor/146172 2025-03-17T17:41:36.3451714Z * [new tag] ciflow/inductor/146176 -> ciflow/inductor/146176 2025-03-17T17:41:36.3451981Z * [new tag] ciflow/inductor/146180 -> ciflow/inductor/146180 2025-03-17T17:41:36.3452177Z * [new tag] ciflow/inductor/146218 -> ciflow/inductor/146218 2025-03-17T17:41:36.3452391Z * [new tag] ciflow/inductor/146228 -> ciflow/inductor/146228 2025-03-17T17:41:36.3452686Z * [new tag] ciflow/inductor/146264 -> ciflow/inductor/146264 2025-03-17T17:41:36.3452901Z * [new tag] ciflow/inductor/146267 -> ciflow/inductor/146267 2025-03-17T17:41:36.3453097Z * [new tag] ciflow/inductor/146275 -> ciflow/inductor/146275 2025-03-17T17:41:36.3453313Z * [new tag] ciflow/inductor/146280 -> ciflow/inductor/146280 2025-03-17T17:41:36.3453538Z * [new tag] ciflow/inductor/146288 -> ciflow/inductor/146288 2025-03-17T17:41:36.3453762Z * [new tag] ciflow/inductor/146319 -> ciflow/inductor/146319 2025-03-17T17:41:36.3453962Z * [new tag] ciflow/inductor/146335 -> ciflow/inductor/146335 2025-03-17T17:41:36.3454175Z * [new tag] ciflow/inductor/146341 -> ciflow/inductor/146341 2025-03-17T17:41:36.3454752Z * [new tag] ciflow/inductor/146395 -> ciflow/inductor/146395 2025-03-17T17:41:36.3455208Z * [new tag] ciflow/inductor/146415 -> ciflow/inductor/146415 2025-03-17T17:41:36.3455866Z * [new tag] ciflow/inductor/146421 -> ciflow/inductor/146421 2025-03-17T17:41:36.3456543Z * [new tag] 
ciflow/inductor/146436 -> ciflow/inductor/146436 2025-03-17T17:41:36.3457281Z * [new tag] ciflow/inductor/146500 -> ciflow/inductor/146500 2025-03-17T17:41:36.3457862Z * [new tag] ciflow/inductor/146501 -> ciflow/inductor/146501 2025-03-17T17:41:36.3458508Z * [new tag] ciflow/inductor/146505 -> ciflow/inductor/146505 2025-03-17T17:41:36.3459184Z * [new tag] ciflow/inductor/146506 -> ciflow/inductor/146506 2025-03-17T17:41:36.3459826Z * [new tag] ciflow/inductor/146526 -> ciflow/inductor/146526 2025-03-17T17:41:36.3461024Z * [new tag] ciflow/inductor/146530 -> ciflow/inductor/146530 2025-03-17T17:41:36.3461651Z * [new tag] ciflow/inductor/146535 -> ciflow/inductor/146535 2025-03-17T17:41:36.3462307Z * [new tag] ciflow/inductor/146558 -> ciflow/inductor/146558 2025-03-17T17:41:36.3462976Z * [new tag] ciflow/inductor/146561 -> ciflow/inductor/146561 2025-03-17T17:41:36.3463628Z * [new tag] ciflow/inductor/146562 -> ciflow/inductor/146562 2025-03-17T17:41:36.3464280Z * [new tag] ciflow/inductor/146661 -> ciflow/inductor/146661 2025-03-17T17:41:36.3464921Z * [new tag] ciflow/inductor/146678 -> ciflow/inductor/146678 2025-03-17T17:41:36.3465583Z * [new tag] ciflow/inductor/146706 -> ciflow/inductor/146706 2025-03-17T17:41:36.3466234Z * [new tag] ciflow/inductor/146718 -> ciflow/inductor/146718 2025-03-17T17:41:36.3467039Z * [new tag] ciflow/inductor/146779 -> ciflow/inductor/146779 2025-03-17T17:41:36.3468026Z * [new tag] ciflow/inductor/146781 -> ciflow/inductor/146781 2025-03-17T17:41:36.3468919Z * [new tag] ciflow/inductor/146823 -> ciflow/inductor/146823 2025-03-17T17:41:36.3469555Z * [new tag] ciflow/inductor/146826 -> ciflow/inductor/146826 2025-03-17T17:41:36.3470257Z * [new tag] ciflow/inductor/146827 -> ciflow/inductor/146827 2025-03-17T17:41:36.3471143Z * [new tag] ciflow/inductor/146844 -> ciflow/inductor/146844 2025-03-17T17:41:36.3471734Z * [new tag] ciflow/inductor/146845 -> ciflow/inductor/146845 2025-03-17T17:41:36.3472390Z * [new tag] ciflow/inductor/146850 -> ciflow/inductor/146850 2025-03-17T17:41:36.3473036Z * [new tag] ciflow/inductor/146864 -> ciflow/inductor/146864 2025-03-17T17:41:36.3474029Z * [new tag] ciflow/inductor/146874 -> ciflow/inductor/146874 2025-03-17T17:41:36.3474633Z * [new tag] ciflow/inductor/146894 -> ciflow/inductor/146894 2025-03-17T17:41:36.3475386Z * [new tag] ciflow/inductor/146895 -> ciflow/inductor/146895 2025-03-17T17:41:36.3476286Z * [new tag] ciflow/inductor/146919 -> ciflow/inductor/146919 2025-03-17T17:41:36.3476922Z * [new tag] ciflow/inductor/146921 -> ciflow/inductor/146921 2025-03-17T17:41:36.3477555Z * [new tag] ciflow/inductor/146928 -> ciflow/inductor/146928 2025-03-17T17:41:36.3478199Z * [new tag] ciflow/inductor/146935 -> ciflow/inductor/146935 2025-03-17T17:41:36.3478892Z * [new tag] ciflow/inductor/146942 -> ciflow/inductor/146942 2025-03-17T17:41:36.3479617Z * [new tag] ciflow/inductor/146962 -> ciflow/inductor/146962 2025-03-17T17:41:36.3480462Z * [new tag] ciflow/inductor/146983 -> ciflow/inductor/146983 2025-03-17T17:41:36.3481150Z * [new tag] ciflow/inductor/146989 -> ciflow/inductor/146989 2025-03-17T17:41:36.3482210Z * [new tag] ciflow/inductor/147007 -> ciflow/inductor/147007 2025-03-17T17:41:36.3482801Z * [new tag] ciflow/inductor/147021 -> ciflow/inductor/147021 2025-03-17T17:41:36.3483526Z * [new tag] ciflow/inductor/147036 -> ciflow/inductor/147036 2025-03-17T17:41:36.3484116Z * [new tag] ciflow/inductor/147049 -> ciflow/inductor/147049 2025-03-17T17:41:36.3484818Z * [new tag] ciflow/inductor/147105 -> 
ciflow/inductor/147105 2025-03-17T17:41:36.3485474Z * [new tag] ciflow/inductor/147146 -> ciflow/inductor/147146 2025-03-17T17:41:36.3486127Z * [new tag] ciflow/inductor/147155 -> ciflow/inductor/147155 2025-03-17T17:41:36.3486786Z * [new tag] ciflow/inductor/147178 -> ciflow/inductor/147178 2025-03-17T17:41:36.3487537Z * [new tag] ciflow/inductor/147225 -> ciflow/inductor/147225 2025-03-17T17:41:36.3488220Z * [new tag] ciflow/inductor/147229 -> ciflow/inductor/147229 2025-03-17T17:41:36.3488868Z * [new tag] ciflow/inductor/147269 -> ciflow/inductor/147269 2025-03-17T17:41:36.3489560Z * [new tag] ciflow/inductor/147272 -> ciflow/inductor/147272 2025-03-17T17:41:36.3490412Z * [new tag] ciflow/inductor/147314 -> ciflow/inductor/147314 2025-03-17T17:41:36.3491039Z * [new tag] ciflow/inductor/147315 -> ciflow/inductor/147315 2025-03-17T17:41:36.3491769Z * [new tag] ciflow/inductor/147341 -> ciflow/inductor/147341 2025-03-17T17:41:36.3492431Z * [new tag] ciflow/inductor/147360 -> ciflow/inductor/147360 2025-03-17T17:41:36.3493420Z * [new tag] ciflow/inductor/147368 -> ciflow/inductor/147368 2025-03-17T17:41:36.3493906Z * [new tag] ciflow/inductor/147410 -> ciflow/inductor/147410 2025-03-17T17:41:36.3494506Z * [new tag] ciflow/inductor/147414 -> ciflow/inductor/147414 2025-03-17T17:41:36.3495174Z * [new tag] ciflow/inductor/147415 -> ciflow/inductor/147415 2025-03-17T17:41:36.3495810Z * [new tag] ciflow/inductor/147422 -> ciflow/inductor/147422 2025-03-17T17:41:36.3496652Z * [new tag] ciflow/inductor/147445 -> ciflow/inductor/147445 2025-03-17T17:41:36.3497318Z * [new tag] ciflow/inductor/147452 -> ciflow/inductor/147452 2025-03-17T17:41:36.3497987Z * [new tag] ciflow/inductor/147481 -> ciflow/inductor/147481 2025-03-17T17:41:36.3498619Z * [new tag] ciflow/inductor/147498 -> ciflow/inductor/147498 2025-03-17T17:41:36.3499760Z * [new tag] ciflow/inductor/147514 -> ciflow/inductor/147514 2025-03-17T17:41:36.3500349Z * [new tag] ciflow/inductor/147528 -> ciflow/inductor/147528 2025-03-17T17:41:36.3501000Z * [new tag] ciflow/inductor/147562 -> ciflow/inductor/147562 2025-03-17T17:41:36.3501682Z * [new tag] ciflow/inductor/147583 -> ciflow/inductor/147583 2025-03-17T17:41:36.3502329Z * [new tag] ciflow/inductor/147592 -> ciflow/inductor/147592 2025-03-17T17:41:36.3503009Z * [new tag] ciflow/inductor/147603 -> ciflow/inductor/147603 2025-03-17T17:41:36.3503950Z * [new tag] ciflow/inductor/147656 -> ciflow/inductor/147656 2025-03-17T17:41:36.3504577Z * [new tag] ciflow/inductor/147745 -> ciflow/inductor/147745 2025-03-17T17:41:36.3505210Z * [new tag] ciflow/inductor/147790 -> ciflow/inductor/147790 2025-03-17T17:41:36.3505876Z * [new tag] ciflow/inductor/147797 -> ciflow/inductor/147797 2025-03-17T17:41:36.3506575Z * [new tag] ciflow/inductor/147800 -> ciflow/inductor/147800 2025-03-17T17:41:36.3507484Z * [new tag] ciflow/inductor/147821 -> ciflow/inductor/147821 2025-03-17T17:41:36.3508235Z * [new tag] ciflow/inductor/147870 -> ciflow/inductor/147870 2025-03-17T17:41:36.3508893Z * [new tag] ciflow/inductor/147881 -> ciflow/inductor/147881 2025-03-17T17:41:36.3509933Z * [new tag] ciflow/inductor/147899 -> ciflow/inductor/147899 2025-03-17T17:41:36.3510397Z * [new tag] ciflow/inductor/147902 -> ciflow/inductor/147902 2025-03-17T17:41:36.3511043Z * [new tag] ciflow/inductor/147903 -> ciflow/inductor/147903 2025-03-17T17:41:36.3512083Z * [new tag] ciflow/inductor/147908 -> ciflow/inductor/147908 2025-03-17T17:41:36.3512701Z * [new tag] ciflow/inductor/147910 -> ciflow/inductor/147910 
2025-03-17T17:41:36.3513389Z * [new tag] ciflow/inductor/147915 -> ciflow/inductor/147915 2025-03-17T17:41:36.3514036Z * [new tag] ciflow/inductor/147917 -> ciflow/inductor/147917 2025-03-17T17:41:36.3514977Z * [new tag] ciflow/inductor/147927 -> ciflow/inductor/147927 2025-03-17T17:41:36.3515730Z * [new tag] ciflow/inductor/147960 -> ciflow/inductor/147960 2025-03-17T17:41:36.3516426Z * [new tag] ciflow/inductor/147962 -> ciflow/inductor/147962 2025-03-17T17:41:36.3517083Z * [new tag] ciflow/inductor/147990 -> ciflow/inductor/147990 2025-03-17T17:41:36.3517740Z * [new tag] ciflow/inductor/148008 -> ciflow/inductor/148008 2025-03-17T17:41:36.3518427Z * [new tag] ciflow/inductor/148010 -> ciflow/inductor/148010 2025-03-17T17:41:36.3519047Z * [new tag] ciflow/inductor/148046 -> ciflow/inductor/148046 2025-03-17T17:41:36.3519754Z * [new tag] ciflow/inductor/148063 -> ciflow/inductor/148063 2025-03-17T17:41:36.3520365Z * [new tag] ciflow/inductor/148091 -> ciflow/inductor/148091 2025-03-17T17:41:36.3521077Z * [new tag] ciflow/inductor/148092 -> ciflow/inductor/148092 2025-03-17T17:41:36.3521720Z * [new tag] ciflow/inductor/148104 -> ciflow/inductor/148104 2025-03-17T17:41:36.3522421Z * [new tag] ciflow/inductor/148130 -> ciflow/inductor/148130 2025-03-17T17:41:36.3523061Z * [new tag] ciflow/inductor/148131 -> ciflow/inductor/148131 2025-03-17T17:41:36.3523751Z * [new tag] ciflow/inductor/148132 -> ciflow/inductor/148132 2025-03-17T17:41:36.3524454Z * [new tag] ciflow/inductor/148160 -> ciflow/inductor/148160 2025-03-17T17:41:36.3525135Z * [new tag] ciflow/inductor/148163 -> ciflow/inductor/148163 2025-03-17T17:41:36.3526059Z * [new tag] ciflow/inductor/148173 -> ciflow/inductor/148173 2025-03-17T17:41:36.3526704Z * [new tag] ciflow/inductor/148174 -> ciflow/inductor/148174 2025-03-17T17:41:36.3527394Z * [new tag] ciflow/inductor/148176 -> ciflow/inductor/148176 2025-03-17T17:41:36.3528059Z * [new tag] ciflow/inductor/148186 -> ciflow/inductor/148186 2025-03-17T17:41:36.3528753Z * [new tag] ciflow/inductor/148202 -> ciflow/inductor/148202 2025-03-17T17:41:36.3529409Z * [new tag] ciflow/inductor/148206 -> ciflow/inductor/148206 2025-03-17T17:41:36.3530093Z * [new tag] ciflow/inductor/148209 -> ciflow/inductor/148209 2025-03-17T17:41:36.3530764Z * [new tag] ciflow/inductor/148210 -> ciflow/inductor/148210 2025-03-17T17:41:36.3531416Z * [new tag] ciflow/inductor/148234 -> ciflow/inductor/148234 2025-03-17T17:41:36.3532112Z * [new tag] ciflow/inductor/148235 -> ciflow/inductor/148235 2025-03-17T17:41:36.3532763Z * [new tag] ciflow/inductor/148236 -> ciflow/inductor/148236 2025-03-17T17:41:36.3533687Z * [new tag] ciflow/inductor/148294 -> ciflow/inductor/148294 2025-03-17T17:41:36.3534285Z * [new tag] ciflow/inductor/148328 -> ciflow/inductor/148328 2025-03-17T17:41:36.3535017Z * [new tag] ciflow/inductor/148357 -> ciflow/inductor/148357 2025-03-17T17:41:36.3535747Z * [new tag] ciflow/inductor/148358 -> ciflow/inductor/148358 2025-03-17T17:41:36.3536685Z * [new tag] ciflow/inductor/148380 -> ciflow/inductor/148380 2025-03-17T17:41:36.3544932Z * [new tag] ciflow/inductor/148408 -> ciflow/inductor/148408 2025-03-17T17:41:36.3545686Z * [new tag] ciflow/inductor/148413 -> ciflow/inductor/148413 2025-03-17T17:41:36.3546430Z * [new tag] ciflow/inductor/148414 -> ciflow/inductor/148414 2025-03-17T17:41:36.3547148Z * [new tag] ciflow/inductor/148415 -> ciflow/inductor/148415 2025-03-17T17:41:36.3548158Z * [new tag] ciflow/inductor/148418 -> ciflow/inductor/148418 2025-03-17T17:41:36.3548698Z * [new tag] 
ciflow/inductor/148424 -> ciflow/inductor/148424 2025-03-17T17:41:36.3549414Z * [new tag] ciflow/inductor/148430 -> ciflow/inductor/148430 2025-03-17T17:41:36.3550316Z * [new tag] ciflow/inductor/148445 -> ciflow/inductor/148445 2025-03-17T17:41:36.3551049Z * [new tag] ciflow/inductor/148452 -> ciflow/inductor/148452 2025-03-17T17:41:36.3551720Z * [new tag] ciflow/inductor/148459 -> ciflow/inductor/148459 2025-03-17T17:41:36.3552412Z * [new tag] ciflow/inductor/148461 -> ciflow/inductor/148461 2025-03-17T17:41:36.3553619Z * [new tag] ciflow/inductor/148484 -> ciflow/inductor/148484 2025-03-17T17:41:36.3554231Z * [new tag] ciflow/inductor/148485 -> ciflow/inductor/148485 2025-03-17T17:41:36.3554918Z * [new tag] ciflow/inductor/148488 -> ciflow/inductor/148488 2025-03-17T17:41:36.3555612Z * [new tag] ciflow/inductor/148492 -> ciflow/inductor/148492 2025-03-17T17:41:36.3556623Z * [new tag] ciflow/inductor/148502 -> ciflow/inductor/148502 2025-03-17T17:41:36.3557600Z * [new tag] ciflow/inductor/148503 -> ciflow/inductor/148503 2025-03-17T17:41:36.3558160Z * [new tag] ciflow/inductor/148508 -> ciflow/inductor/148508 2025-03-17T17:41:36.3558844Z * [new tag] ciflow/inductor/148516 -> ciflow/inductor/148516 2025-03-17T17:41:36.3559790Z * [new tag] ciflow/inductor/148517 -> ciflow/inductor/148517 2025-03-17T17:41:36.3560523Z * [new tag] ciflow/inductor/148529 -> ciflow/inductor/148529 2025-03-17T17:41:36.3561193Z * [new tag] ciflow/inductor/148554 -> ciflow/inductor/148554 2025-03-17T17:41:36.3561892Z * [new tag] ciflow/inductor/148569 -> ciflow/inductor/148569 2025-03-17T17:41:36.3562562Z * [new tag] ciflow/inductor/148580 -> ciflow/inductor/148580 2025-03-17T17:41:36.3563325Z * [new tag] ciflow/inductor/148613 -> ciflow/inductor/148613 2025-03-17T17:41:36.3563979Z * [new tag] ciflow/inductor/148618 -> ciflow/inductor/148618 2025-03-17T17:41:36.3564669Z * [new tag] ciflow/inductor/148622 -> ciflow/inductor/148622 2025-03-17T17:41:36.3565337Z * [new tag] ciflow/inductor/148630 -> ciflow/inductor/148630 2025-03-17T17:41:36.3566046Z * [new tag] ciflow/inductor/148637 -> ciflow/inductor/148637 2025-03-17T17:41:36.3567020Z * [new tag] ciflow/inductor/148638 -> ciflow/inductor/148638 2025-03-17T17:41:36.3567557Z * [new tag] ciflow/inductor/148684 -> ciflow/inductor/148684 2025-03-17T17:41:36.3568252Z * [new tag] ciflow/inductor/148692 -> ciflow/inductor/148692 2025-03-17T17:41:36.3569030Z * [new tag] ciflow/inductor/148694 -> ciflow/inductor/148694 2025-03-17T17:41:36.3570026Z * [new tag] ciflow/inductor/148704 -> ciflow/inductor/148704 2025-03-17T17:41:36.3570834Z * [new tag] ciflow/inductor/148708 -> ciflow/inductor/148708 2025-03-17T17:41:36.3571542Z * [new tag] ciflow/inductor/148710 -> ciflow/inductor/148710 2025-03-17T17:41:36.3572288Z * [new tag] ciflow/inductor/148712 -> ciflow/inductor/148712 2025-03-17T17:41:36.3573024Z * [new tag] ciflow/inductor/148729 -> ciflow/inductor/148729 2025-03-17T17:41:36.3573771Z * [new tag] ciflow/inductor/148731 -> ciflow/inductor/148731 2025-03-17T17:41:36.3574503Z * [new tag] ciflow/inductor/148736 -> ciflow/inductor/148736 2025-03-17T17:41:36.3575265Z * [new tag] ciflow/inductor/148742 -> ciflow/inductor/148742 2025-03-17T17:41:36.3575997Z * [new tag] ciflow/inductor/148765 -> ciflow/inductor/148765 2025-03-17T17:41:36.3576760Z * [new tag] ciflow/inductor/148766 -> ciflow/inductor/148766 2025-03-17T17:41:36.3577500Z * [new tag] ciflow/inductor/148772 -> ciflow/inductor/148772 2025-03-17T17:41:36.3578245Z * [new tag] ciflow/inductor/148773 -> 
ciflow/inductor/148773 2025-03-17T17:41:36.3578991Z * [new tag] ciflow/inductor/148780 -> ciflow/inductor/148780 2025-03-17T17:41:36.3579987Z * [new tag] ciflow/inductor/148804 -> ciflow/inductor/148804 2025-03-17T17:41:36.3580665Z * [new tag] ciflow/inductor/148834 -> ciflow/inductor/148834 2025-03-17T17:41:36.3581599Z * [new tag] ciflow/inductor/148844 -> ciflow/inductor/148844 2025-03-17T17:41:36.3582391Z * [new tag] ciflow/inductor/148878 -> ciflow/inductor/148878 2025-03-17T17:41:36.3583132Z * [new tag] ciflow/inductor/148890 -> ciflow/inductor/148890 2025-03-17T17:41:36.3583895Z * [new tag] ciflow/inductor/148893 -> ciflow/inductor/148893 2025-03-17T17:41:36.3584640Z * [new tag] ciflow/inductor/148894 -> ciflow/inductor/148894 2025-03-17T17:41:36.3585392Z * [new tag] ciflow/inductor/148896 -> ciflow/inductor/148896 2025-03-17T17:41:36.3586369Z * [new tag] ciflow/inductor/148898 -> ciflow/inductor/148898 2025-03-17T17:41:36.3587135Z * [new tag] ciflow/inductor/148922 -> ciflow/inductor/148922 2025-03-17T17:41:36.3587861Z * [new tag] ciflow/inductor/148932 -> ciflow/inductor/148932 2025-03-17T17:41:36.3588619Z * [new tag] ciflow/inductor/148947 -> ciflow/inductor/148947 2025-03-17T17:41:36.3589371Z * [new tag] ciflow/inductor/148953 -> ciflow/inductor/148953 2025-03-17T17:41:36.3590111Z * [new tag] ciflow/inductor/148962 -> ciflow/inductor/148962 2025-03-17T17:41:36.3590866Z * [new tag] ciflow/inductor/148991 -> ciflow/inductor/148991 2025-03-17T17:41:36.3591969Z * [new tag] ciflow/inductor/149027 -> ciflow/inductor/149027 2025-03-17T17:41:36.3592683Z * [new tag] ciflow/inductor/149031 -> ciflow/inductor/149031 2025-03-17T17:41:36.3593410Z * [new tag] ciflow/inductor/149039 -> ciflow/inductor/149039 2025-03-17T17:41:36.3594334Z * [new tag] ciflow/inductor/149041 -> ciflow/inductor/149041 2025-03-17T17:41:36.3595015Z * [new tag] ciflow/inductor/149052 -> ciflow/inductor/149052 2025-03-17T17:41:36.3595768Z * [new tag] ciflow/inductor/149054 -> ciflow/inductor/149054 2025-03-17T17:41:36.3596498Z * [new tag] ciflow/inductor/149055 -> ciflow/inductor/149055 2025-03-17T17:41:36.3597247Z * [new tag] ciflow/inductor/149066 -> ciflow/inductor/149066 2025-03-17T17:41:36.3598009Z * [new tag] ciflow/inductor/149067 -> ciflow/inductor/149067 2025-03-17T17:41:36.3598729Z * [new tag] ciflow/inductor/149068 -> ciflow/inductor/149068 2025-03-17T17:41:36.3599577Z * [new tag] ciflow/inductor/149072 -> ciflow/inductor/149072 2025-03-17T17:41:36.3600275Z * [new tag] ciflow/inductor/149084 -> ciflow/inductor/149084 2025-03-17T17:41:36.3601013Z * [new tag] ciflow/inductor/149087 -> ciflow/inductor/149087 2025-03-17T17:41:36.3602008Z * [new tag] ciflow/inductor/149103 -> ciflow/inductor/149103 2025-03-17T17:41:36.3603350Z * [new tag] ciflow/inductor/149136 -> ciflow/inductor/149136 2025-03-17T17:41:36.3604040Z * [new tag] ciflow/inductor/149140 -> ciflow/inductor/149140 2025-03-17T17:41:36.3604805Z * [new tag] ciflow/inductor/149148 -> ciflow/inductor/149148 2025-03-17T17:41:36.3605562Z * [new tag] ciflow/inductor/149149 -> ciflow/inductor/149149 2025-03-17T17:41:36.3606348Z * [new tag] ciflow/inductor/149154 -> ciflow/inductor/149154 2025-03-17T17:41:36.3607095Z * [new tag] ciflow/inductor/149161 -> ciflow/inductor/149161 2025-03-17T17:41:36.3607828Z * [new tag] ciflow/inductor/149167 -> ciflow/inductor/149167 2025-03-17T17:41:36.3608727Z * [new tag] ciflow/inductor/149172 -> ciflow/inductor/149172 2025-03-17T17:41:36.3609512Z * [new tag] ciflow/inductor/149173 -> ciflow/inductor/149173 
2025-03-17T17:41:36.3610254Z * [new tag] ciflow/inductor/149176 -> ciflow/inductor/149176 2025-03-17T17:41:36.3611300Z * [new tag] ciflow/inductor/149178 -> ciflow/inductor/149178 2025-03-17T17:41:36.3612318Z * [new tag] ciflow/inductor/149185 -> ciflow/inductor/149185 2025-03-17T17:41:36.3613030Z * [new tag] ciflow/inductor/149192 -> ciflow/inductor/149192 2025-03-17T17:41:36.3613767Z * [new tag] ciflow/inductor/149197 -> ciflow/inductor/149197 2025-03-17T17:41:36.3614791Z * [new tag] ciflow/inductor/149198 -> ciflow/inductor/149198 2025-03-17T17:41:36.3615454Z * [new tag] ciflow/inductor/149210 -> ciflow/inductor/149210 2025-03-17T17:41:36.3616242Z * [new tag] ciflow/inductor/149211 -> ciflow/inductor/149211 2025-03-17T17:41:36.3616967Z * [new tag] ciflow/inductor/149214 -> ciflow/inductor/149214 2025-03-17T17:41:36.3617701Z * [new tag] ciflow/inductor/149215 -> ciflow/inductor/149215 2025-03-17T17:41:36.3618447Z * [new tag] ciflow/inductor/149229 -> ciflow/inductor/149229 2025-03-17T17:41:36.3619424Z * [new tag] ciflow/inductor/149239 -> ciflow/inductor/149239 2025-03-17T17:41:36.3620190Z * [new tag] ciflow/inductor/149241 -> ciflow/inductor/149241 2025-03-17T17:41:36.3620950Z * [new tag] ciflow/inductor/149247 -> ciflow/inductor/149247 2025-03-17T17:41:36.3621695Z * [new tag] ciflow/inductor/149249 -> ciflow/inductor/149249 2025-03-17T17:41:36.3622465Z * [new tag] ciflow/inductor/149253 -> ciflow/inductor/149253 2025-03-17T17:41:36.3623199Z * [new tag] ciflow/inductor/149266 -> ciflow/inductor/149266 2025-03-17T17:41:36.3623933Z * [new tag] ciflow/inductor/149267 -> ciflow/inductor/149267 2025-03-17T17:41:36.3624696Z * [new tag] ciflow/inductor/149287 -> ciflow/inductor/149287 2025-03-17T17:41:36.3625422Z * [new tag] ciflow/inductor/149288 -> ciflow/inductor/149288 2025-03-17T17:41:36.3626199Z * [new tag] ciflow/inductor/149297 -> ciflow/inductor/149297 2025-03-17T17:41:36.3627167Z * [new tag] ciflow/inductor/149298 -> ciflow/inductor/149298 2025-03-17T17:41:36.3627861Z * [new tag] ciflow/inductor/149321 -> ciflow/inductor/149321 2025-03-17T17:41:36.3628957Z * [new tag] ciflow/inductor/3b9a386 -> ciflow/inductor/3b9a386 2025-03-17T17:41:36.3629766Z * [new tag] ciflow/inductor/3d4b92b -> ciflow/inductor/3d4b92b 2025-03-17T17:41:36.3630703Z * [new tag] ciflow/inductor/88106 -> ciflow/inductor/88106 2025-03-17T17:41:36.3631657Z * [new tag] ciflow/inductor/88196 -> ciflow/inductor/88196 2025-03-17T17:41:36.3632657Z * [new tag] ciflow/inductor/88998 -> ciflow/inductor/88998 2025-03-17T17:41:36.3633563Z * [new tag] ciflow/inductor/d224ac7 -> ciflow/inductor/d224ac7 2025-03-17T17:41:36.3634330Z * [new tag] ciflow/linux-aarch64/125888 -> ciflow/linux-aarch64/125888 2025-03-17T17:41:36.3634940Z * [new tag] ciflow/linux-aarch64/126050 -> ciflow/linux-aarch64/126050 2025-03-17T17:41:36.3635569Z * [new tag] ciflow/linux-aarch64/126054 -> ciflow/linux-aarch64/126054 2025-03-17T17:41:36.3636173Z * [new tag] ciflow/linux-aarch64/133297 -> ciflow/linux-aarch64/133297 2025-03-17T17:41:36.3636966Z * [new tag] ciflow/linux-aarch64/133315 -> ciflow/linux-aarch64/133315 2025-03-17T17:41:36.3637636Z * [new tag] ciflow/linux-aarch64/133392 -> ciflow/linux-aarch64/133392 2025-03-17T17:41:36.3638261Z * [new tag] ciflow/linux-aarch64/133419 -> ciflow/linux-aarch64/133419 2025-03-17T17:41:36.3638882Z * [new tag] ciflow/linux-aarch64/133423 -> ciflow/linux-aarch64/133423 2025-03-17T17:41:36.3639495Z * [new tag] ciflow/linux-aarch64/133667 -> ciflow/linux-aarch64/133667 2025-03-17T17:41:36.3640150Z * [new tag] 
ciflow/linux-aarch64/133753 -> ciflow/linux-aarch64/133753 2025-03-17T17:41:36.3640887Z * [new tag] ciflow/linux-aarch64/135058 -> ciflow/linux-aarch64/135058 2025-03-17T17:41:36.3641910Z * [new tag] ciflow/linux-aarch64/135792 -> ciflow/linux-aarch64/135792 2025-03-17T17:41:36.3642747Z * [new tag] ciflow/linux-aarch64/136355 -> ciflow/linux-aarch64/136355 2025-03-17T17:41:36.3643547Z * [new tag] ciflow/linux-aarch64/137568 -> ciflow/linux-aarch64/137568 2025-03-17T17:41:36.3644310Z * [new tag] ciflow/linux-aarch64/138388 -> ciflow/linux-aarch64/138388 2025-03-17T17:41:36.3644970Z * [new tag] ciflow/linux-aarch64/140159 -> ciflow/linux-aarch64/140159 2025-03-17T17:41:36.3645602Z * [new tag] ciflow/linux-aarch64/146823 -> ciflow/linux-aarch64/146823 2025-03-17T17:41:36.3646256Z * [new tag] ciflow/linux-aarch64/146826 -> ciflow/linux-aarch64/146826 2025-03-17T17:41:36.3646929Z * [new tag] ciflow/linux-aarch64/146895 -> ciflow/linux-aarch64/146895 2025-03-17T17:41:36.3647569Z * [new tag] ciflow/linux-aarch64/147073 -> ciflow/linux-aarch64/147073 2025-03-17T17:41:36.3648224Z * [new tag] ciflow/linux-aarch64/147341 -> ciflow/linux-aarch64/147341 2025-03-17T17:41:36.3648865Z * [new tag] ciflow/linux-aarch64/147359 -> ciflow/linux-aarch64/147359 2025-03-17T17:41:36.3649504Z * [new tag] ciflow/linux-aarch64/147498 -> ciflow/linux-aarch64/147498 2025-03-17T17:41:36.3650268Z * [new tag] ciflow/linux-aarch64/147763 -> ciflow/linux-aarch64/147763 2025-03-17T17:41:36.3650930Z * [new tag] ciflow/linux-aarch64/147855 -> ciflow/linux-aarch64/147855 2025-03-17T17:41:36.3651545Z * [new tag] ciflow/linux-aarch64/147917 -> ciflow/linux-aarch64/147917 2025-03-17T17:41:36.3652191Z * [new tag] ciflow/linux-aarch64/148163 -> ciflow/linux-aarch64/148163 2025-03-17T17:41:36.3652829Z * [new tag] ciflow/linux-aarch64/148173 -> ciflow/linux-aarch64/148173 2025-03-17T17:41:36.3653450Z * [new tag] ciflow/linux-aarch64/148424 -> ciflow/linux-aarch64/148424 2025-03-17T17:41:36.3654270Z * [new tag] ciflow/linux-aarch64/148585 -> ciflow/linux-aarch64/148585 2025-03-17T17:41:36.3654806Z * [new tag] ciflow/linux-aarch64/148653 -> ciflow/linux-aarch64/148653 2025-03-17T17:41:36.3655787Z * [new tag] ciflow/mps/102148 -> ciflow/mps/102148 2025-03-17T17:41:36.3656289Z * [new tag] ciflow/mps/119496 -> ciflow/mps/119496 2025-03-17T17:41:36.3657034Z * [new tag] ciflow/mps/120076 -> ciflow/mps/120076 2025-03-17T17:41:36.3657614Z * [new tag] ciflow/mps/133423 -> ciflow/mps/133423 2025-03-17T17:41:36.3658238Z * [new tag] ciflow/mps/133667 -> ciflow/mps/133667 2025-03-17T17:41:36.3659195Z * [new tag] ciflow/mps/138640 -> ciflow/mps/138640 2025-03-17T17:41:36.3659783Z * [new tag] ciflow/mps/139469 -> ciflow/mps/139469 2025-03-17T17:41:36.3660402Z * [new tag] ciflow/mps/140159 -> ciflow/mps/140159 2025-03-17T17:41:36.3661071Z * [new tag] ciflow/mps/140211 -> ciflow/mps/140211 2025-03-17T17:41:36.3662170Z * [new tag] ciflow/mps/140725 -> ciflow/mps/140725 2025-03-17T17:41:36.3662839Z * [new tag] ciflow/mps/142097 -> ciflow/mps/142097 2025-03-17T17:41:36.3663889Z * [new tag] ciflow/mps/142202 -> ciflow/mps/142202 2025-03-17T17:41:36.3664752Z * [new tag] ciflow/mps/143630 -> ciflow/mps/143630 2025-03-17T17:41:36.3665488Z * [new tag] ciflow/mps/143666 -> ciflow/mps/143666 2025-03-17T17:41:36.3666396Z * [new tag] ciflow/mps/143911 -> ciflow/mps/143911 2025-03-17T17:41:36.3667046Z * [new tag] ciflow/mps/143966 -> ciflow/mps/143966 2025-03-17T17:41:36.3667664Z * [new tag] ciflow/mps/144405 -> ciflow/mps/144405 2025-03-17T17:41:36.3668312Z * [new 
tag] ciflow/mps/144664 -> ciflow/mps/144664 2025-03-17T17:41:36.3669519Z * [new tag] ciflow/mps/145955 -> ciflow/mps/145955 2025-03-17T17:41:36.3670075Z * [new tag] ciflow/mps/146436 -> ciflow/mps/146436 2025-03-17T17:41:36.3670925Z * [new tag] ciflow/mps/146754 -> ciflow/mps/146754 2025-03-17T17:41:36.3671474Z * [new tag] ciflow/mps/146989 -> ciflow/mps/146989 2025-03-17T17:41:36.3672645Z * [new tag] ciflow/mps/147583 -> ciflow/mps/147583 2025-03-17T17:41:36.3673560Z * [new tag] ciflow/mps/147644 -> ciflow/mps/147644 2025-03-17T17:41:36.3674130Z * [new tag] ciflow/mps/147893 -> ciflow/mps/147893 2025-03-17T17:41:36.3674767Z * [new tag] ciflow/mps/148408 -> ciflow/mps/148408 2025-03-17T17:41:36.3675442Z * [new tag] ciflow/mps/148415 -> ciflow/mps/148415 2025-03-17T17:41:36.3676084Z * [new tag] ciflow/mps/149173 -> ciflow/mps/149173 2025-03-17T17:41:36.3676738Z * [new tag] ciflow/mps/149237 -> ciflow/mps/149237 2025-03-17T17:41:36.3677589Z * [new tag] ciflow/nightly/149192 -> ciflow/nightly/149192 2025-03-17T17:41:36.3678515Z * [new tag] ciflow/op-benchmark/143733 -> ciflow/op-benchmark/143733 2025-03-17T17:41:36.3679441Z * [new tag] ciflow/periodic/054a2fd -> ciflow/periodic/054a2fd 2025-03-17T17:41:36.3680160Z * [new tag] ciflow/periodic/123020 -> ciflow/periodic/123020 2025-03-17T17:41:36.3680800Z * [new tag] ciflow/periodic/140989 -> ciflow/periodic/140989 2025-03-17T17:41:36.3681435Z * [new tag] ciflow/periodic/141309 -> ciflow/periodic/141309 2025-03-17T17:41:36.3682092Z * [new tag] ciflow/periodic/141730 -> ciflow/periodic/141730 2025-03-17T17:41:36.3682781Z * [new tag] ciflow/periodic/142179 -> ciflow/periodic/142179 2025-03-17T17:41:36.3683434Z * [new tag] ciflow/periodic/143959 -> ciflow/periodic/143959 2025-03-17T17:41:36.3684033Z * [new tag] ciflow/periodic/144953 -> ciflow/periodic/144953 2025-03-17T17:41:36.3684655Z * [new tag] ciflow/periodic/145130 -> ciflow/periodic/145130 2025-03-17T17:41:36.3685246Z * [new tag] ciflow/periodic/146264 -> ciflow/periodic/146264 2025-03-17T17:41:36.3686062Z * [new tag] ciflow/periodic/146403 -> ciflow/periodic/146403 2025-03-17T17:41:36.3686961Z * [new tag] ciflow/periodic/146823 -> ciflow/periodic/146823 2025-03-17T17:41:36.3687942Z * [new tag] ciflow/periodic/146903 -> ciflow/periodic/146903 2025-03-17T17:41:36.3688667Z * [new tag] ciflow/periodic/147870 -> ciflow/periodic/147870 2025-03-17T17:41:36.3689437Z * [new tag] ciflow/periodic/148760 -> ciflow/periodic/148760 2025-03-17T17:41:36.3690611Z * [new tag] ciflow/periodic/149192 -> ciflow/periodic/149192 2025-03-17T17:41:36.3691391Z * [new tag] ciflow/periodic/2a6d37d -> ciflow/periodic/2a6d37d 2025-03-17T17:41:36.3692320Z * [new tag] ciflow/periodic/317eeb8 -> ciflow/periodic/317eeb8 2025-03-17T17:41:36.3693026Z * [new tag] ciflow/periodic/3c32 -> ciflow/periodic/3c32 2025-03-17T17:41:36.3693964Z * [new tag] ciflow/periodic/3e98831 -> ciflow/periodic/3e98831 2025-03-17T17:41:36.3694893Z * [new tag] ciflow/periodic/94512-point -> ciflow/periodic/94512-point 2025-03-17T17:41:36.3695953Z * [new tag] ciflow/periodic/csl/test87519 -> ciflow/periodic/csl/test87519 2025-03-17T17:41:36.3696720Z * [new tag] ciflow/periodic/csltest88275 -> ciflow/periodic/csltest88275 2025-03-17T17:41:36.3697562Z * [new tag] ciflow/periodic/csltest88761 -> ciflow/periodic/csltest88761 2025-03-17T17:41:36.3698436Z * [new tag] ciflow/periodic/release_1.12 -> ciflow/periodic/release_1.12 2025-03-17T17:41:36.3699364Z * [new tag] ciflow/periodic/release_1.12.0 -> ciflow/periodic/release_1.12.0 
2025-03-17T17:41:36.3700309Z * [new tag] ciflow/periodic/sha-ec5b83 -> ciflow/periodic/sha-ec5b83 2025-03-17T17:41:36.3700969Z * [new tag] ciflow/rocm-mi300/148492 -> ciflow/rocm-mi300/148492 2025-03-17T17:41:36.3701597Z * [new tag] ciflow/rocm-mi300/148916 -> ciflow/rocm-mi300/148916 2025-03-17T17:41:36.3702508Z * [new tag] ciflow/rocm-mi300/148945 -> ciflow/rocm-mi300/148945 2025-03-17T17:41:36.3703165Z * [new tag] ciflow/rocm-mi300/149088 -> ciflow/rocm-mi300/149088 2025-03-17T17:41:36.3704074Z * [new tag] ciflow/rocm/124424 -> ciflow/rocm/124424 2025-03-17T17:41:36.3704686Z * [new tag] ciflow/rocm/139469 -> ciflow/rocm/139469 2025-03-17T17:41:36.3705210Z * [new tag] ciflow/rocm/139975 -> ciflow/rocm/139975 2025-03-17T17:41:36.3705821Z * [new tag] ciflow/rocm/140989 -> ciflow/rocm/140989 2025-03-17T17:41:36.3706487Z * [new tag] ciflow/rocm/141309 -> ciflow/rocm/141309 2025-03-17T17:41:36.3707160Z * [new tag] ciflow/rocm/142097 -> ciflow/rocm/142097 2025-03-17T17:41:36.3707774Z * [new tag] ciflow/rocm/142859 -> ciflow/rocm/142859 2025-03-17T17:41:36.3708416Z * [new tag] ciflow/rocm/143416 -> ciflow/rocm/143416 2025-03-17T17:41:36.3709101Z * [new tag] ciflow/rocm/143971 -> ciflow/rocm/143971 2025-03-17T17:41:36.3709710Z * [new tag] ciflow/rocm/144120 -> ciflow/rocm/144120 2025-03-17T17:41:36.3710659Z * [new tag] ciflow/rocm/144572 -> ciflow/rocm/144572 2025-03-17T17:41:36.3711590Z * [new tag] ciflow/rocm/144664 -> ciflow/rocm/144664 2025-03-17T17:41:36.3712329Z * [new tag] ciflow/rocm/145130 -> ciflow/rocm/145130 2025-03-17T17:41:36.3713207Z * [new tag] ciflow/rocm/145475 -> ciflow/rocm/145475 2025-03-17T17:41:36.3713931Z * [new tag] ciflow/rocm/145584 -> ciflow/rocm/145584 2025-03-17T17:41:36.3714573Z * [new tag] ciflow/rocm/145685 -> ciflow/rocm/145685 2025-03-17T17:41:36.3715235Z * [new tag] ciflow/rocm/146264 -> ciflow/rocm/146264 2025-03-17T17:41:36.3716097Z * [new tag] ciflow/rocm/146448 -> ciflow/rocm/146448 2025-03-17T17:41:36.3716692Z * [new tag] ciflow/rocm/146903 -> ciflow/rocm/146903 2025-03-17T17:41:36.3717343Z * [new tag] ciflow/rocm/147315 -> ciflow/rocm/147315 2025-03-17T17:41:36.3718077Z * [new tag] ciflow/rocm/147382 -> ciflow/rocm/147382 2025-03-17T17:41:36.3718755Z * [new tag] ciflow/rocm/147452 -> ciflow/rocm/147452 2025-03-17T17:41:36.3719612Z * [new tag] ciflow/rocm/147527 -> ciflow/rocm/147527 2025-03-17T17:41:36.3720230Z * [new tag] ciflow/rocm/147821 -> ciflow/rocm/147821 2025-03-17T17:41:36.3720840Z * [new tag] ciflow/rocm/148327 -> ciflow/rocm/148327 2025-03-17T17:41:36.3721787Z * [new tag] ciflow/rocm/148355 -> ciflow/rocm/148355 2025-03-17T17:41:36.3722316Z * [new tag] ciflow/rocm/148492 -> ciflow/rocm/148492 2025-03-17T17:41:36.3723016Z * [new tag] ciflow/rocm/148672 -> ciflow/rocm/148672 2025-03-17T17:41:36.3723644Z * [new tag] ciflow/rocm/148864 -> ciflow/rocm/148864 2025-03-17T17:41:36.3724282Z * [new tag] ciflow/rocm/148880 -> ciflow/rocm/148880 2025-03-17T17:41:36.3724956Z * [new tag] ciflow/rocm/148916 -> ciflow/rocm/148916 2025-03-17T17:41:36.3725606Z * [new tag] ciflow/rocm/148945 -> ciflow/rocm/148945 2025-03-17T17:41:36.3726276Z * [new tag] ciflow/rocm/149039 -> ciflow/rocm/149039 2025-03-17T17:41:36.3726901Z * [new tag] ciflow/rocm/149041 -> ciflow/rocm/149041 2025-03-17T17:41:36.3728124Z * [new tag] ciflow/rocm/149245 -> ciflow/rocm/149245 2025-03-17T17:41:36.3728993Z * [new tag] ciflow/s390/142346 -> ciflow/s390/142346 2025-03-17T17:41:36.3729528Z * [new tag] ciflow/s390/143959 -> ciflow/s390/143959 2025-03-17T17:41:36.3730155Z * [new tag] 
ciflow/s390/148452 -> ciflow/s390/148452 2025-03-17T17:41:36.3731147Z * [new tag] ciflow/slow/01c7106 -> ciflow/slow/01c7106 2025-03-17T17:41:36.3731808Z * [new tag] ciflow/slow/0577043 -> ciflow/slow/0577043 2025-03-17T17:41:36.3733117Z * [new tag] ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym -> ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym 2025-03-17T17:41:36.3733522Z * [new tag] ciflow/slow/0e81104 -> ciflow/slow/0e81104 2025-03-17T17:41:36.3734148Z * [new tag] ciflow/slow/139975 -> ciflow/slow/139975 2025-03-17T17:41:36.3734733Z * [new tag] ciflow/slow/146903 -> ciflow/slow/146903 2025-03-17T17:41:36.3735336Z * [new tag] ciflow/slow/149192 -> ciflow/slow/149192 2025-03-17T17:41:36.3736212Z * [new tag] ciflow/slow/1732077 -> ciflow/slow/1732077 2025-03-17T17:41:36.3737592Z * [new tag] ciflow/slow/187eb7c -> ciflow/slow/187eb7c 2025-03-17T17:41:36.3738932Z * [new tag] ciflow/slow/1faef89 -> ciflow/slow/1faef89 2025-03-17T17:41:36.3739692Z * [new tag] ciflow/slow/3920ec1 -> ciflow/slow/3920ec1 2025-03-17T17:41:36.3740366Z * [new tag] ciflow/slow/3b7c6b2 -> ciflow/slow/3b7c6b2 2025-03-17T17:41:36.3741294Z * [new tag] ciflow/slow/59a3759 -> ciflow/slow/59a3759 2025-03-17T17:41:36.3741986Z * [new tag] ciflow/slow/70ef0bb -> ciflow/slow/70ef0bb 2025-03-17T17:41:36.3742783Z * [new tag] ciflow/slow/788ff06 -> ciflow/slow/788ff06 2025-03-17T17:41:36.3744082Z * [new tag] ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym -> ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym 2025-03-17T17:41:36.3744477Z * [new tag] ciflow/slow/9d85864 -> ciflow/slow/9d85864 2025-03-17T17:41:36.3745379Z * [new tag] ciflow/slow/9ffad5b -> ciflow/slow/9ffad5b 2025-03-17T17:41:36.3746150Z * [new tag] ciflow/slow/a206e8b -> ciflow/slow/a206e8b 2025-03-17T17:41:36.3747150Z * [new tag] ciflow/slow/a837609 -> ciflow/slow/a837609 2025-03-17T17:41:36.3747902Z * [new tag] ciflow/slow/af841f3 -> ciflow/slow/af841f3 2025-03-17T17:41:36.3749249Z * [new tag] ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym -> ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym 2025-03-17T17:41:36.3749610Z * [new tag] ciflow/torchao/149192 -> ciflow/torchao/149192 2025-03-17T17:41:36.3750587Z * [new tag] ciflow/trunk/101814 -> ciflow/trunk/101814 2025-03-17T17:41:36.3751185Z * [new tag] ciflow/trunk/108303 -> ciflow/trunk/108303 2025-03-17T17:41:36.3751740Z * [new tag] ciflow/trunk/113257 -> ciflow/trunk/113257 2025-03-17T17:41:36.3752356Z * [new tag] ciflow/trunk/113258 -> ciflow/trunk/113258 2025-03-17T17:41:36.3753000Z * [new tag] ciflow/trunk/120076 -> ciflow/trunk/120076 2025-03-17T17:41:36.3753599Z * [new tag] ciflow/trunk/121445 -> ciflow/trunk/121445 2025-03-17T17:41:36.3754234Z * [new tag] ciflow/trunk/123020 -> ciflow/trunk/123020 2025-03-17T17:41:36.3754839Z * [new tag] ciflow/trunk/124424 -> ciflow/trunk/124424 2025-03-17T17:41:36.3755451Z * [new tag] ciflow/trunk/124490 -> ciflow/trunk/124490 2025-03-17T17:41:36.3756627Z * [new tag] ciflow/trunk/125806 -> ciflow/trunk/125806 2025-03-17T17:41:36.3757233Z * [new tag] ciflow/trunk/125888 -> ciflow/trunk/125888 2025-03-17T17:41:36.3758180Z * [new tag] ciflow/trunk/125995 -> ciflow/trunk/125995 2025-03-17T17:41:36.3758933Z * [new tag] ciflow/trunk/126050 -> ciflow/trunk/126050 2025-03-17T17:41:36.3759812Z * [new tag] ciflow/trunk/126054 -> ciflow/trunk/126054 2025-03-17T17:41:36.3760763Z * [new tag] ciflow/trunk/126635 -> ciflow/trunk/126635 2025-03-17T17:41:36.3761327Z * [new tag] ciflow/trunk/127171 -> ciflow/trunk/127171 
2025-03-17T17:41:36.3762186Z * [new tag] ciflow/trunk/127919 -> ciflow/trunk/127919 2025-03-17T17:41:36.3762777Z * [new tag] ciflow/trunk/129352 -> ciflow/trunk/129352 2025-03-17T17:41:36.3763416Z * [new tag] ciflow/trunk/129420 -> ciflow/trunk/129420 2025-03-17T17:41:36.3764058Z * [new tag] ciflow/trunk/130141 -> ciflow/trunk/130141 2025-03-17T17:41:36.3764785Z * [new tag] ciflow/trunk/130522 -> ciflow/trunk/130522 2025-03-17T17:41:36.3765484Z * [new tag] ciflow/trunk/130752 -> ciflow/trunk/130752 2025-03-17T17:41:36.3766112Z * [new tag] ciflow/trunk/131354 -> ciflow/trunk/131354 2025-03-17T17:41:36.3766834Z * [new tag] ciflow/trunk/131507 -> ciflow/trunk/131507 2025-03-17T17:41:36.3767410Z * [new tag] ciflow/trunk/132021 -> ciflow/trunk/132021 2025-03-17T17:41:36.3768090Z * [new tag] ciflow/trunk/133044 -> ciflow/trunk/133044 2025-03-17T17:41:36.3768730Z * [new tag] ciflow/trunk/133289 -> ciflow/trunk/133289 2025-03-17T17:41:36.3769387Z * [new tag] ciflow/trunk/133296 -> ciflow/trunk/133296 2025-03-17T17:41:36.3770020Z * [new tag] ciflow/trunk/133297 -> ciflow/trunk/133297 2025-03-17T17:41:36.3770681Z * [new tag] ciflow/trunk/133315 -> ciflow/trunk/133315 2025-03-17T17:41:36.3771330Z * [new tag] ciflow/trunk/133392 -> ciflow/trunk/133392 2025-03-17T17:41:36.3771960Z * [new tag] ciflow/trunk/133419 -> ciflow/trunk/133419 2025-03-17T17:41:36.3772634Z * [new tag] ciflow/trunk/133423 -> ciflow/trunk/133423 2025-03-17T17:41:36.3773291Z * [new tag] ciflow/trunk/133667 -> ciflow/trunk/133667 2025-03-17T17:41:36.3773947Z * [new tag] ciflow/trunk/133753 -> ciflow/trunk/133753 2025-03-17T17:41:36.3774687Z * [new tag] ciflow/trunk/134219 -> ciflow/trunk/134219 2025-03-17T17:41:36.3775338Z * [new tag] ciflow/trunk/135058 -> ciflow/trunk/135058 2025-03-17T17:41:36.3776208Z * [new tag] ciflow/trunk/135631 -> ciflow/trunk/135631 2025-03-17T17:41:36.3776864Z * [new tag] ciflow/trunk/136780 -> ciflow/trunk/136780 2025-03-17T17:41:36.3777708Z * [new tag] ciflow/trunk/136824 -> ciflow/trunk/136824 2025-03-17T17:41:36.3778371Z * [new tag] ciflow/trunk/136835 -> ciflow/trunk/136835 2025-03-17T17:41:36.3779294Z * [new tag] ciflow/trunk/137400 -> ciflow/trunk/137400 2025-03-17T17:41:36.3779815Z * [new tag] ciflow/trunk/138436 -> ciflow/trunk/138436 2025-03-17T17:41:36.3780513Z * [new tag] ciflow/trunk/138626 -> ciflow/trunk/138626 2025-03-17T17:41:36.3781134Z * [new tag] ciflow/trunk/138834 -> ciflow/trunk/138834 2025-03-17T17:41:36.3781799Z * [new tag] ciflow/trunk/138996 -> ciflow/trunk/138996 2025-03-17T17:41:36.3782533Z * [new tag] ciflow/trunk/139070 -> ciflow/trunk/139070 2025-03-17T17:41:36.3783182Z * [new tag] ciflow/trunk/139094 -> ciflow/trunk/139094 2025-03-17T17:41:36.3784030Z * [new tag] ciflow/trunk/139171 -> ciflow/trunk/139171 2025-03-17T17:41:36.3784646Z * [new tag] ciflow/trunk/139971 -> ciflow/trunk/139971 2025-03-17T17:41:36.3785276Z * [new tag] ciflow/trunk/139975 -> ciflow/trunk/139975 2025-03-17T17:41:36.3785912Z * [new tag] ciflow/trunk/140159 -> ciflow/trunk/140159 2025-03-17T17:41:36.3786741Z * [new tag] ciflow/trunk/140200 -> ciflow/trunk/140200 2025-03-17T17:41:36.3787438Z * [new tag] ciflow/trunk/140211 -> ciflow/trunk/140211 2025-03-17T17:41:36.3788092Z * [new tag] ciflow/trunk/140298 -> ciflow/trunk/140298 2025-03-17T17:41:36.3788720Z * [new tag] ciflow/trunk/140323 -> ciflow/trunk/140323 2025-03-17T17:41:36.3789375Z * [new tag] ciflow/trunk/140365 -> ciflow/trunk/140365 2025-03-17T17:41:36.3790263Z * [new tag] ciflow/trunk/140399 -> ciflow/trunk/140399 
2025-03-17T17:41:36.3790789Z * [new tag] ciflow/trunk/140756 -> ciflow/trunk/140756 2025-03-17T17:41:36.3791478Z * [new tag] ciflow/trunk/140979 -> ciflow/trunk/140979 2025-03-17T17:41:36.3792114Z * [new tag] ciflow/trunk/140989 -> ciflow/trunk/140989 2025-03-17T17:41:36.3792854Z * [new tag] ciflow/trunk/141309 -> ciflow/trunk/141309 2025-03-17T17:41:36.3793427Z * [new tag] ciflow/trunk/141730 -> ciflow/trunk/141730 2025-03-17T17:41:36.3794086Z * [new tag] ciflow/trunk/141842 -> ciflow/trunk/141842 2025-03-17T17:41:36.3794723Z * [new tag] ciflow/trunk/141910 -> ciflow/trunk/141910 2025-03-17T17:41:36.3795409Z * [new tag] ciflow/trunk/141961 -> ciflow/trunk/141961 2025-03-17T17:41:36.3796038Z * [new tag] ciflow/trunk/142097 -> ciflow/trunk/142097 2025-03-17T17:41:36.3796715Z * [new tag] ciflow/trunk/142179 -> ciflow/trunk/142179 2025-03-17T17:41:36.3797369Z * [new tag] ciflow/trunk/142272 -> ciflow/trunk/142272 2025-03-17T17:41:36.3798204Z * [new tag] ciflow/trunk/142326 -> ciflow/trunk/142326 2025-03-17T17:41:36.3798859Z * [new tag] ciflow/trunk/142346 -> ciflow/trunk/142346 2025-03-17T17:41:36.3799545Z * [new tag] ciflow/trunk/142372 -> ciflow/trunk/142372 2025-03-17T17:41:36.3800168Z * [new tag] ciflow/trunk/142859 -> ciflow/trunk/142859 2025-03-17T17:41:36.3801625Z * [new tag] ciflow/trunk/143093 -> ciflow/trunk/143093 2025-03-17T17:41:36.3802351Z * [new tag] ciflow/trunk/143261 -> ciflow/trunk/143261 2025-03-17T17:41:36.3803023Z * [new tag] ciflow/trunk/143313 -> ciflow/trunk/143313 2025-03-17T17:41:36.3803912Z * [new tag] ciflow/trunk/143347 -> ciflow/trunk/143347 2025-03-17T17:41:36.3804600Z * [new tag] ciflow/trunk/143402 -> ciflow/trunk/143402 2025-03-17T17:41:36.3805251Z * [new tag] ciflow/trunk/143416 -> ciflow/trunk/143416 2025-03-17T17:41:36.3806155Z * [new tag] ciflow/trunk/143451 -> ciflow/trunk/143451 2025-03-17T17:41:36.3806788Z * [new tag] ciflow/trunk/143475 -> ciflow/trunk/143475 2025-03-17T17:41:36.3807415Z * [new tag] ciflow/trunk/143630 -> ciflow/trunk/143630 2025-03-17T17:41:36.3808077Z * [new tag] ciflow/trunk/143666 -> ciflow/trunk/143666 2025-03-17T17:41:36.3808730Z * [new tag] ciflow/trunk/143671 -> ciflow/trunk/143671 2025-03-17T17:41:36.3809630Z * [new tag] ciflow/trunk/143689 -> ciflow/trunk/143689 2025-03-17T17:41:36.3810233Z * [new tag] ciflow/trunk/143712 -> ciflow/trunk/143712 2025-03-17T17:41:36.3811005Z * [new tag] ciflow/trunk/143822 -> ciflow/trunk/143822 2025-03-17T17:41:36.3811770Z * [new tag] ciflow/trunk/143833 -> ciflow/trunk/143833 2025-03-17T17:41:36.3812659Z * [new tag] ciflow/trunk/143894 -> ciflow/trunk/143894 2025-03-17T17:41:36.3813272Z * [new tag] ciflow/trunk/143896 -> ciflow/trunk/143896 2025-03-17T17:41:36.3813900Z * [new tag] ciflow/trunk/143961 -> ciflow/trunk/143961 2025-03-17T17:41:36.3814568Z * [new tag] ciflow/trunk/143966 -> ciflow/trunk/143966 2025-03-17T17:41:36.3815197Z * [new tag] ciflow/trunk/144017 -> ciflow/trunk/144017 2025-03-17T17:41:36.3815956Z * [new tag] ciflow/trunk/144019 -> ciflow/trunk/144019 2025-03-17T17:41:36.3816609Z * [new tag] ciflow/trunk/144120 -> ciflow/trunk/144120 2025-03-17T17:41:36.3817364Z * [new tag] ciflow/trunk/144138 -> ciflow/trunk/144138 2025-03-17T17:41:36.3818259Z * [new tag] ciflow/trunk/144177 -> ciflow/trunk/144177 2025-03-17T17:41:36.3819033Z * [new tag] ciflow/trunk/144268 -> ciflow/trunk/144268 2025-03-17T17:41:36.3819782Z * [new tag] ciflow/trunk/144272 -> ciflow/trunk/144272 2025-03-17T17:41:36.3820350Z * [new tag] ciflow/trunk/144293 -> ciflow/trunk/144293 
2025-03-17T17:41:36.3821074Z * [new tag] ciflow/trunk/144452 -> ciflow/trunk/144452 2025-03-17T17:41:36.3821817Z * [new tag] ciflow/trunk/144468 -> ciflow/trunk/144468 2025-03-17T17:41:36.3822471Z * [new tag] ciflow/trunk/144557 -> ciflow/trunk/144557 2025-03-17T17:41:36.3823103Z * [new tag] ciflow/trunk/144572 -> ciflow/trunk/144572 2025-03-17T17:41:36.3824032Z * [new tag] ciflow/trunk/144616 -> ciflow/trunk/144616 2025-03-17T17:41:36.3824720Z * [new tag] ciflow/trunk/144621 -> ciflow/trunk/144621 2025-03-17T17:41:36.3825362Z * [new tag] ciflow/trunk/144664 -> ciflow/trunk/144664 2025-03-17T17:41:36.3826039Z * [new tag] ciflow/trunk/144721 -> ciflow/trunk/144721 2025-03-17T17:41:36.3826749Z * [new tag] ciflow/trunk/144771 -> ciflow/trunk/144771 2025-03-17T17:41:36.3827658Z * [new tag] ciflow/trunk/144844 -> ciflow/trunk/144844 2025-03-17T17:41:36.3828264Z * [new tag] ciflow/trunk/144880 -> ciflow/trunk/144880 2025-03-17T17:41:36.3828830Z * [new tag] ciflow/trunk/144925 -> ciflow/trunk/144925 2025-03-17T17:41:36.3829488Z * [new tag] ciflow/trunk/144953 -> ciflow/trunk/144953 2025-03-17T17:41:36.3830129Z * [new tag] ciflow/trunk/144975 -> ciflow/trunk/144975 2025-03-17T17:41:36.3830791Z * [new tag] ciflow/trunk/144992 -> ciflow/trunk/144992 2025-03-17T17:41:36.3831429Z * [new tag] ciflow/trunk/145061 -> ciflow/trunk/145061 2025-03-17T17:41:36.3832093Z * [new tag] ciflow/trunk/145116 -> ciflow/trunk/145116 2025-03-17T17:41:36.3832749Z * [new tag] ciflow/trunk/145119 -> ciflow/trunk/145119 2025-03-17T17:41:36.3833396Z * [new tag] ciflow/trunk/145130 -> ciflow/trunk/145130 2025-03-17T17:41:36.3834256Z * [new tag] ciflow/trunk/145136 -> ciflow/trunk/145136 2025-03-17T17:41:36.3834831Z * [new tag] ciflow/trunk/145153 -> ciflow/trunk/145153 2025-03-17T17:41:36.3835502Z * [new tag] ciflow/trunk/145224 -> ciflow/trunk/145224 2025-03-17T17:41:36.3836144Z * [new tag] ciflow/trunk/145241 -> ciflow/trunk/145241 2025-03-17T17:41:36.3836963Z * [new tag] ciflow/trunk/145254 -> ciflow/trunk/145254 2025-03-17T17:41:36.3837661Z * [new tag] ciflow/trunk/145331 -> ciflow/trunk/145331 2025-03-17T17:41:36.3838624Z * [new tag] ciflow/trunk/145406 -> ciflow/trunk/145406 2025-03-17T17:41:36.3839218Z * [new tag] ciflow/trunk/145523 -> ciflow/trunk/145523 2025-03-17T17:41:36.3839860Z * [new tag] ciflow/trunk/145559 -> ciflow/trunk/145559 2025-03-17T17:41:36.3840809Z * [new tag] ciflow/trunk/145600 -> ciflow/trunk/145600 2025-03-17T17:41:36.3841521Z * [new tag] ciflow/trunk/145677 -> ciflow/trunk/145677 2025-03-17T17:41:36.3842446Z * [new tag] ciflow/trunk/145719 -> ciflow/trunk/145719 2025-03-17T17:41:36.3843076Z * [new tag] ciflow/trunk/145936 -> ciflow/trunk/145936 2025-03-17T17:41:36.3843741Z * [new tag] ciflow/trunk/145979 -> ciflow/trunk/145979 2025-03-17T17:41:36.3844383Z * [new tag] ciflow/trunk/146051 -> ciflow/trunk/146051 2025-03-17T17:41:36.3845159Z * [new tag] ciflow/trunk/146090 -> ciflow/trunk/146090 2025-03-17T17:41:36.3845803Z * [new tag] ciflow/trunk/146115 -> ciflow/trunk/146115 2025-03-17T17:41:36.3846595Z * [new tag] ciflow/trunk/146135 -> ciflow/trunk/146135 2025-03-17T17:41:36.3847749Z * [new tag] ciflow/trunk/146176 -> ciflow/trunk/146176 2025-03-17T17:41:36.3848468Z * [new tag] ciflow/trunk/146182 -> ciflow/trunk/146182 2025-03-17T17:41:36.3849182Z * [new tag] ciflow/trunk/146275 -> ciflow/trunk/146275 2025-03-17T17:41:36.3849911Z * [new tag] ciflow/trunk/146289 -> ciflow/trunk/146289 2025-03-17T17:41:36.3850570Z * [new tag] ciflow/trunk/146335 -> ciflow/trunk/146335 
2025-03-17T17:41:36.3851200Z * [new tag] ciflow/trunk/146421 -> ciflow/trunk/146421 2025-03-17T17:41:36.3852169Z * [new tag] ciflow/trunk/146489 -> ciflow/trunk/146489 2025-03-17T17:41:36.3852841Z * [new tag] ciflow/trunk/146517 -> ciflow/trunk/146517 2025-03-17T17:41:36.3853539Z * [new tag] ciflow/trunk/146530 -> ciflow/trunk/146530 2025-03-17T17:41:36.3854180Z * [new tag] ciflow/trunk/146561 -> ciflow/trunk/146561 2025-03-17T17:41:36.3854827Z * [new tag] ciflow/trunk/146562 -> ciflow/trunk/146562 2025-03-17T17:41:36.3855554Z * [new tag] ciflow/trunk/146573 -> ciflow/trunk/146573 2025-03-17T17:41:36.3856192Z * [new tag] ciflow/trunk/146661 -> ciflow/trunk/146661 2025-03-17T17:41:36.3856860Z * [new tag] ciflow/trunk/146706 -> ciflow/trunk/146706 2025-03-17T17:41:36.3857495Z * [new tag] ciflow/trunk/146718 -> ciflow/trunk/146718 2025-03-17T17:41:36.3858174Z * [new tag] ciflow/trunk/146777 -> ciflow/trunk/146777 2025-03-17T17:41:36.3859141Z * [new tag] ciflow/trunk/146807 -> ciflow/trunk/146807 2025-03-17T17:41:36.3859575Z * [new tag] ciflow/trunk/146823 -> ciflow/trunk/146823 2025-03-17T17:41:36.3860247Z * [new tag] ciflow/trunk/146826 -> ciflow/trunk/146826 2025-03-17T17:41:36.3860914Z * [new tag] ciflow/trunk/146827 -> ciflow/trunk/146827 2025-03-17T17:41:36.3861533Z * [new tag] ciflow/trunk/146845 -> ciflow/trunk/146845 2025-03-17T17:41:36.3862188Z * [new tag] ciflow/trunk/146874 -> ciflow/trunk/146874 2025-03-17T17:41:36.3862874Z * [new tag] ciflow/trunk/146903 -> ciflow/trunk/146903 2025-03-17T17:41:36.3863736Z * [new tag] ciflow/trunk/146911 -> ciflow/trunk/146911 2025-03-17T17:41:36.3864357Z * [new tag] ciflow/trunk/146928 -> ciflow/trunk/146928 2025-03-17T17:41:36.3865019Z * [new tag] ciflow/trunk/147072 -> ciflow/trunk/147072 2025-03-17T17:41:36.3865648Z * [new tag] ciflow/trunk/147105 -> ciflow/trunk/147105 2025-03-17T17:41:36.3866427Z * [new tag] ciflow/trunk/147155 -> ciflow/trunk/147155 2025-03-17T17:41:36.3867056Z * [new tag] ciflow/trunk/147229 -> ciflow/trunk/147229 2025-03-17T17:41:36.3867732Z * [new tag] ciflow/trunk/147272 -> ciflow/trunk/147272 2025-03-17T17:41:36.3868383Z * [new tag] ciflow/trunk/147314 -> ciflow/trunk/147314 2025-03-17T17:41:36.3869090Z * [new tag] ciflow/trunk/147368 -> ciflow/trunk/147368 2025-03-17T17:41:36.3869793Z * [new tag] ciflow/trunk/147379 -> ciflow/trunk/147379 2025-03-17T17:41:36.3870440Z * [new tag] ciflow/trunk/147422 -> ciflow/trunk/147422 2025-03-17T17:41:36.3871389Z * [new tag] ciflow/trunk/147433 -> ciflow/trunk/147433 2025-03-17T17:41:36.3871969Z * [new tag] ciflow/trunk/147452 -> ciflow/trunk/147452 2025-03-17T17:41:36.3872621Z * [new tag] ciflow/trunk/147481 -> ciflow/trunk/147481 2025-03-17T17:41:36.3873341Z * [new tag] ciflow/trunk/147498 -> ciflow/trunk/147498 2025-03-17T17:41:36.3873931Z * [new tag] ciflow/trunk/147507 -> ciflow/trunk/147507 2025-03-17T17:41:36.3874593Z * [new tag] ciflow/trunk/147583 -> ciflow/trunk/147583 2025-03-17T17:41:36.3875374Z * [new tag] ciflow/trunk/147593 -> ciflow/trunk/147593 2025-03-17T17:41:36.3876016Z * [new tag] ciflow/trunk/147599 -> ciflow/trunk/147599 2025-03-17T17:41:36.3876692Z * [new tag] ciflow/trunk/147656 -> ciflow/trunk/147656 2025-03-17T17:41:36.3877355Z * [new tag] ciflow/trunk/147664 -> ciflow/trunk/147664 2025-03-17T17:41:36.3878103Z * [new tag] ciflow/trunk/147670 -> ciflow/trunk/147670 2025-03-17T17:41:36.3878763Z * [new tag] ciflow/trunk/147752 -> ciflow/trunk/147752 2025-03-17T17:41:36.3879417Z * [new tag] ciflow/trunk/147797 -> ciflow/trunk/147797 
2025-03-17T17:41:36.3880360Z * [new tag] ciflow/trunk/147808 -> ciflow/trunk/147808 2025-03-17T17:41:36.3881392Z * [new tag] ciflow/trunk/147820 -> ciflow/trunk/147820 2025-03-17T17:41:36.3882038Z * [new tag] ciflow/trunk/147821 -> ciflow/trunk/147821 2025-03-17T17:41:36.3882704Z * [new tag] ciflow/trunk/147870 -> ciflow/trunk/147870 2025-03-17T17:41:36.3883378Z * [new tag] ciflow/trunk/147881 -> ciflow/trunk/147881 2025-03-17T17:41:36.3883996Z * [new tag] ciflow/trunk/147897 -> ciflow/trunk/147897 2025-03-17T17:41:36.3884687Z * [new tag] ciflow/trunk/147910 -> ciflow/trunk/147910 2025-03-17T17:41:36.3885398Z * [new tag] ciflow/trunk/147917 -> ciflow/trunk/147917 2025-03-17T17:41:36.3885978Z * [new tag] ciflow/trunk/147962 -> ciflow/trunk/147962 2025-03-17T17:41:36.3886621Z * [new tag] ciflow/trunk/148024 -> ciflow/trunk/148024 2025-03-17T17:41:36.3887259Z * [new tag] ciflow/trunk/148130 -> ciflow/trunk/148130 2025-03-17T17:41:36.3887932Z * [new tag] ciflow/trunk/148131 -> ciflow/trunk/148131 2025-03-17T17:41:36.3888882Z * [new tag] ciflow/trunk/148140 -> ciflow/trunk/148140 2025-03-17T17:41:36.3889407Z * [new tag] ciflow/trunk/148163 -> ciflow/trunk/148163 2025-03-17T17:41:36.3890082Z * [new tag] ciflow/trunk/148173 -> ciflow/trunk/148173 2025-03-17T17:41:36.3890736Z * [new tag] ciflow/trunk/148180 -> ciflow/trunk/148180 2025-03-17T17:41:36.3891658Z * [new tag] ciflow/trunk/148281 -> ciflow/trunk/148281 2025-03-17T17:41:36.3892801Z * [new tag] ciflow/trunk/148327 -> ciflow/trunk/148327 2025-03-17T17:41:36.3893659Z * [new tag] ciflow/trunk/148360 -> ciflow/trunk/148360 2025-03-17T17:41:36.3894360Z * [new tag] ciflow/trunk/148455 -> ciflow/trunk/148455 2025-03-17T17:41:36.3894987Z * [new tag] ciflow/trunk/148492 -> ciflow/trunk/148492 2025-03-17T17:41:36.3895666Z * [new tag] ciflow/trunk/148502 -> ciflow/trunk/148502 2025-03-17T17:41:36.3896341Z * [new tag] ciflow/trunk/148503 -> ciflow/trunk/148503 2025-03-17T17:41:36.3896980Z * [new tag] ciflow/trunk/148517 -> ciflow/trunk/148517 2025-03-17T17:41:36.3897649Z * [new tag] ciflow/trunk/148554 -> ciflow/trunk/148554 2025-03-17T17:41:36.3898533Z * [new tag] ciflow/trunk/148603 -> ciflow/trunk/148603 2025-03-17T17:41:36.3899372Z * [new tag] ciflow/trunk/148611 -> ciflow/trunk/148611 2025-03-17T17:41:36.3899987Z * [new tag] ciflow/trunk/148622 -> ciflow/trunk/148622 2025-03-17T17:41:36.3900731Z * [new tag] ciflow/trunk/148646 -> ciflow/trunk/148646 2025-03-17T17:41:36.3901385Z * [new tag] ciflow/trunk/148684 -> ciflow/trunk/148684 2025-03-17T17:41:36.3902036Z * [new tag] ciflow/trunk/148704 -> ciflow/trunk/148704 2025-03-17T17:41:36.3902695Z * [new tag] ciflow/trunk/148708 -> ciflow/trunk/148708 2025-03-17T17:41:36.3903353Z * [new tag] ciflow/trunk/148736 -> ciflow/trunk/148736 2025-03-17T17:41:36.3904074Z * [new tag] ciflow/trunk/148772 -> ciflow/trunk/148772 2025-03-17T17:41:36.3904719Z * [new tag] ciflow/trunk/148773 -> ciflow/trunk/148773 2025-03-17T17:41:36.3905403Z * [new tag] ciflow/trunk/148834 -> ciflow/trunk/148834 2025-03-17T17:41:36.3906045Z * [new tag] ciflow/trunk/148864 -> ciflow/trunk/148864 2025-03-17T17:41:36.3907034Z * [new tag] ciflow/trunk/148875 -> ciflow/trunk/148875 2025-03-17T17:41:36.3907600Z * [new tag] ciflow/trunk/148878 -> ciflow/trunk/148878 2025-03-17T17:41:36.3908276Z * [new tag] ciflow/trunk/148880 -> ciflow/trunk/148880 2025-03-17T17:41:36.3908921Z * [new tag] ciflow/trunk/148890 -> ciflow/trunk/148890 2025-03-17T17:41:36.3909617Z * [new tag] ciflow/trunk/148900 -> ciflow/trunk/148900 
2025-03-17T17:41:36.3910284Z * [new tag] ciflow/trunk/148903 -> ciflow/trunk/148903 2025-03-17T17:41:36.3911274Z * [new tag] ciflow/trunk/148906 -> ciflow/trunk/148906 2025-03-17T17:41:36.3911949Z * [new tag] ciflow/trunk/148919 -> ciflow/trunk/148919 2025-03-17T17:41:36.3912600Z * [new tag] ciflow/trunk/148922 -> ciflow/trunk/148922 2025-03-17T17:41:36.3913509Z * [new tag] ciflow/trunk/148936 -> ciflow/trunk/148936 2025-03-17T17:41:36.3914088Z * [new tag] ciflow/trunk/149031 -> ciflow/trunk/149031 2025-03-17T17:41:36.3914736Z * [new tag] ciflow/trunk/149041 -> ciflow/trunk/149041 2025-03-17T17:41:36.3915707Z * [new tag] ciflow/trunk/149053 -> ciflow/trunk/149053 2025-03-17T17:41:36.3916294Z * [new tag] ciflow/trunk/149054 -> ciflow/trunk/149054 2025-03-17T17:41:36.3917000Z * [new tag] ciflow/trunk/149057 -> ciflow/trunk/149057 2025-03-17T17:41:36.3917817Z * [new tag] ciflow/trunk/149113 -> ciflow/trunk/149113 2025-03-17T17:41:36.3918688Z * [new tag] ciflow/trunk/149114 -> ciflow/trunk/149114 2025-03-17T17:41:36.3919257Z * [new tag] ciflow/trunk/149136 -> ciflow/trunk/149136 2025-03-17T17:41:36.3919961Z * [new tag] ciflow/trunk/149148 -> ciflow/trunk/149148 2025-03-17T17:41:36.3920584Z * [new tag] ciflow/trunk/149172 -> ciflow/trunk/149172 2025-03-17T17:41:36.3921263Z * [new tag] ciflow/trunk/149185 -> ciflow/trunk/149185 2025-03-17T17:41:36.3921902Z * [new tag] ciflow/trunk/149192 -> ciflow/trunk/149192 2025-03-17T17:41:36.3922803Z * [new tag] ciflow/trunk/149213 -> ciflow/trunk/149213 2025-03-17T17:41:36.3923417Z * [new tag] ciflow/trunk/149231 -> ciflow/trunk/149231 2025-03-17T17:41:36.3924174Z * [new tag] ciflow/trunk/149232 -> ciflow/trunk/149232 2025-03-17T17:41:36.3924855Z * [new tag] ciflow/trunk/149237 -> ciflow/trunk/149237 2025-03-17T17:41:36.3925525Z * [new tag] ciflow/trunk/149239 -> ciflow/trunk/149239 2025-03-17T17:41:36.3926190Z * [new tag] ciflow/trunk/149240 -> ciflow/trunk/149240 2025-03-17T17:41:36.3927387Z * [new tag] ciflow/trunk/149243 -> ciflow/trunk/149243 2025-03-17T17:41:36.3928008Z * [new tag] ciflow/trunk/149283 -> ciflow/trunk/149283 2025-03-17T17:41:36.3928695Z * [new tag] ciflow/trunk/149294 -> ciflow/trunk/149294 2025-03-17T17:41:36.3929336Z * [new tag] ciflow/trunk/149295 -> ciflow/trunk/149295 2025-03-17T17:41:36.3929986Z * [new tag] ciflow/trunk/149296 -> ciflow/trunk/149296 2025-03-17T17:41:36.3930655Z * [new tag] ciflow/trunk/149297 -> ciflow/trunk/149297 2025-03-17T17:41:36.3931635Z * [new tag] ciflow/trunk/149320 -> ciflow/trunk/149320 2025-03-17T17:41:36.3932329Z * [new tag] ciflow/trunk/149322 -> ciflow/trunk/149322 2025-03-17T17:41:36.3933277Z * [new tag] ciflow/trunk/149328 -> ciflow/trunk/149328 2025-03-17T17:41:36.3933988Z * [new tag] ciflow/trunk/149330 -> ciflow/trunk/149330 2025-03-17T17:41:36.3934912Z * [new tag] ciflow/trunk/70978 -> ciflow/trunk/70978 2025-03-17T17:41:36.3935572Z * [new tag] ciflow/trunk/70979 -> ciflow/trunk/70979 2025-03-17T17:41:36.3936634Z * [new tag] ciflow/unstable/123 -> ciflow/unstable/123 2025-03-17T17:41:36.3937493Z * [new tag] ciflow/unstable/146104 -> ciflow/unstable/146104 2025-03-17T17:41:36.3938161Z * [new tag] ciflow/unstable/146264 -> ciflow/unstable/146264 2025-03-17T17:41:36.3939217Z * [new tag] ciflow/win-arm64/148753 -> ciflow/win-arm64/148753 2025-03-17T17:41:36.3939882Z * [new tag] ciflow/xpu/137566 -> ciflow/xpu/137566 2025-03-17T17:41:36.3940489Z * [new tag] ciflow/xpu/138996 -> ciflow/xpu/138996 2025-03-17T17:41:36.3941121Z * [new tag] ciflow/xpu/139469 -> ciflow/xpu/139469 
2025-03-17T17:41:36.3941723Z * [new tag] ciflow/xpu/139971 -> ciflow/xpu/139971 2025-03-17T17:41:36.3942334Z * [new tag] ciflow/xpu/140365 -> ciflow/xpu/140365 2025-03-17T17:41:36.3942965Z * [new tag] ciflow/xpu/140372 -> ciflow/xpu/140372 2025-03-17T17:41:36.3943559Z * [new tag] ciflow/xpu/142097 -> ciflow/xpu/142097 2025-03-17T17:41:36.3944194Z * [new tag] ciflow/xpu/143597 -> ciflow/xpu/143597 2025-03-17T17:41:36.3944803Z * [new tag] ciflow/xpu/143833 -> ciflow/xpu/143833 2025-03-17T17:41:36.3945493Z * [new tag] ciflow/xpu/144452 -> ciflow/xpu/144452 2025-03-17T17:41:36.3946093Z * [new tag] ciflow/xpu/144664 -> ciflow/xpu/144664 2025-03-17T17:41:36.3947267Z * [new tag] ciflow/xpu/147355 -> ciflow/xpu/147355 2025-03-17T17:41:36.3948110Z * [new tag] ciflow/xpu/147498 -> ciflow/xpu/147498 2025-03-17T17:41:36.3948838Z * [new tag] ciflow/xpu/147507 -> ciflow/xpu/147507 2025-03-17T17:41:36.3949484Z * [new tag] ciflow/xpu/147583 -> ciflow/xpu/147583 2025-03-17T17:41:36.3950139Z * [new tag] ciflow/xpu/147664 -> ciflow/xpu/147664 2025-03-17T17:41:36.3950788Z * [new tag] ciflow/xpu/147821 -> ciflow/xpu/147821 2025-03-17T17:41:36.3951410Z * [new tag] ciflow/xpu/147962 -> ciflow/xpu/147962 2025-03-17T17:41:36.3952065Z * [new tag] ciflow/xpu/148360 -> ciflow/xpu/148360 2025-03-17T17:41:36.3955378Z * [new tag] ciflow/xpu/148646 -> ciflow/xpu/148646 2025-03-17T17:41:36.3955580Z * [new tag] ciflow/xpu/148864 -> ciflow/xpu/148864 2025-03-17T17:41:36.3955749Z * [new tag] ciflow/xpu/148880 -> ciflow/xpu/148880 2025-03-17T17:41:36.3956027Z * [new tag] ciflow/xpu/149053 -> ciflow/xpu/149053 2025-03-17T17:41:36.3956206Z * [new tag] ciflow/xpu/149113 -> ciflow/xpu/149113 2025-03-17T17:41:36.3956555Z * [new tag] ciflow/xpu/149114 -> ciflow/xpu/149114 2025-03-17T17:41:36.3957126Z * [new tag] cslpull75 -> cslpull75 2025-03-17T17:41:36.3957955Z * [new tag] cslpull76 -> cslpull76 2025-03-17T17:41:36.3958609Z * [new tag] cslpull77 -> cslpull77 2025-03-17T17:41:36.3959431Z * [new tag] cslpull78 -> cslpull78 2025-03-17T17:41:36.3960478Z * [new tag] cslpull79 -> cslpull79 2025-03-17T17:41:36.3961470Z * [new tag] cslpull80 -> cslpull80 2025-03-17T17:41:36.3962154Z * [new tag] cslpull81 -> cslpull81 2025-03-17T17:41:36.3963027Z * [new tag] cslpull82 -> cslpull82 2025-03-17T17:41:36.3963706Z * [new tag] cslpull83 -> cslpull83 2025-03-17T17:41:36.3964541Z * [new tag] cslpull84 -> cslpull84 2025-03-17T17:41:36.3965206Z * [new tag] cslpull85 -> cslpull85 2025-03-17T17:41:36.3966079Z * [new tag] cslpull86 -> cslpull86 2025-03-17T17:41:36.3966793Z * [new tag] cslpull87 -> cslpull87 2025-03-17T17:41:36.3967666Z * [new tag] cslpull88 -> cslpull88 2025-03-17T17:41:36.3968385Z * [new tag] cslpull89 -> cslpull89 2025-03-17T17:41:36.3968996Z * [new tag] cslpull90 -> cslpull90 2025-03-17T17:41:36.3970200Z * [new tag] cslpull91 -> cslpull91 2025-03-17T17:41:36.3970861Z * [new tag] cslpull92 -> cslpull92 2025-03-17T17:41:36.3971733Z * [new tag] flight_5 -> flight_5 2025-03-17T17:41:36.3972524Z * [new tag] flight_5.1 -> flight_5.1 2025-03-17T17:41:36.3973372Z * [new tag] flight_5.2 -> flight_5.2 2025-03-17T17:41:36.3973973Z * [new tag] flight_5.3 -> flight_5.3 2025-03-17T17:41:36.3974811Z * [new tag] forpull1 -> forpull1 2025-03-17T17:41:36.3975788Z * [new tag] malfet/tag-2ef5611 -> malfet/tag-2ef5611 2025-03-17T17:41:36.3976465Z * [new tag] malfet/tag-317b1a0 -> malfet/tag-317b1a0 2025-03-17T17:41:36.3977340Z * [new tag] malfet/tag-ec6f767 -> malfet/tag-ec6f767 2025-03-17T17:41:36.3978066Z * [new tag] nightly-binary -> nightly-binary 
2025-03-17T17:41:36.3978713Z * [new tag] sqzhang_flight4_plus -> sqzhang_flight4_plus 2025-03-17T17:41:36.3979608Z * [new tag] sqzhang_flight_3 -> sqzhang_flight_3 2025-03-17T17:41:36.3980267Z * [new tag] v0.1.1 -> v0.1.1 2025-03-17T17:41:36.3981502Z * [new tag] v0.1.10 -> v0.1.10 2025-03-17T17:41:36.3982193Z * [new tag] v0.1.11 -> v0.1.11 2025-03-17T17:41:36.3983009Z * [new tag] v0.1.12 -> v0.1.12 2025-03-17T17:41:36.3983664Z * [new tag] v0.1.2 -> v0.1.2 2025-03-17T17:41:36.3984351Z * [new tag] v0.1.3 -> v0.1.3 2025-03-17T17:41:36.3985373Z * [new tag] v0.1.4 -> v0.1.4 2025-03-17T17:41:36.3986027Z * [new tag] v0.1.5 -> v0.1.5 2025-03-17T17:41:36.3987019Z * [new tag] v0.1.6 -> v0.1.6 2025-03-17T17:41:36.3987605Z * [new tag] v0.1.7 -> v0.1.7 2025-03-17T17:41:36.3988292Z * [new tag] v0.1.8 -> v0.1.8 2025-03-17T17:41:36.3989093Z * [new tag] v0.1.9 -> v0.1.9 2025-03-17T17:41:36.3989747Z * [new tag] v0.2.0 -> v0.2.0 2025-03-17T17:41:36.3990621Z * [new tag] v0.3.0 -> v0.3.0 2025-03-17T17:41:36.3991476Z * [new tag] v0.3.1 -> v0.3.1 2025-03-17T17:41:36.3992283Z * [new tag] v0.4.0 -> v0.4.0 2025-03-17T17:41:36.3992958Z * [new tag] v0.4.1 -> v0.4.1 2025-03-17T17:41:36.3993755Z * [new tag] v1.0.0 -> v1.0.0 2025-03-17T17:41:36.3994607Z * [new tag] v1.0.0a0 -> v1.0.0a0 2025-03-17T17:41:36.3995278Z * [new tag] v1.0.1 -> v1.0.1 2025-03-17T17:41:36.3996142Z * [new tag] v1.0rc0 -> v1.0rc0 2025-03-17T17:41:36.3996621Z * [new tag] v1.0rc1 -> v1.0rc1 2025-03-17T17:41:36.3997503Z * [new tag] v1.1.0 -> v1.1.0 2025-03-17T17:41:36.3998296Z * [new tag] v1.1.0a0 -> v1.1.0a0 2025-03-17T17:41:36.3999281Z * [new tag] v1.10.0 -> v1.10.0 2025-03-17T17:41:36.4000142Z * [new tag] v1.10.0-rc1 -> v1.10.0-rc1 2025-03-17T17:41:36.4000954Z * [new tag] v1.10.0-rc2 -> v1.10.0-rc2 2025-03-17T17:41:36.4001548Z * [new tag] v1.10.0-rc3 -> v1.10.0-rc3 2025-03-17T17:41:36.4002441Z * [new tag] v1.10.1 -> v1.10.1 2025-03-17T17:41:36.4002975Z * [new tag] v1.10.1-rc1 -> v1.10.1-rc1 2025-03-17T17:41:36.4003582Z * [new tag] v1.10.2 -> v1.10.2 2025-03-17T17:41:36.4004182Z * [new tag] v1.10.2-rc1 -> v1.10.2-rc1 2025-03-17T17:41:36.4005068Z * [new tag] v1.11.0 -> v1.11.0 2025-03-17T17:41:36.4005909Z * [new tag] v1.11.0-rc1 -> v1.11.0-rc1 2025-03-17T17:41:36.4006815Z * [new tag] v1.11.0-rc2 -> v1.11.0-rc2 2025-03-17T17:41:36.4007653Z * [new tag] v1.11.0-rc3 -> v1.11.0-rc3 2025-03-17T17:41:36.4008493Z * [new tag] v1.11.0-rc4 -> v1.11.0-rc4 2025-03-17T17:41:36.4009355Z * [new tag] v1.11.0-rc5 -> v1.11.0-rc5 2025-03-17T17:41:36.4009968Z * [new tag] v1.11.0-rc6 -> v1.11.0-rc6 2025-03-17T17:41:36.4010593Z * [new tag] v1.11.0-rc7 -> v1.11.0-rc7 2025-03-17T17:41:36.4011457Z * [new tag] v1.12.0 -> v1.12.0 2025-03-17T17:41:36.4012154Z * [new tag] v1.12.0-rc1 -> v1.12.0-rc1 2025-03-17T17:41:36.4013024Z * [new tag] v1.12.0-rc2 -> v1.12.0-rc2 2025-03-17T17:41:36.4013915Z * [new tag] v1.12.0-rc3 -> v1.12.0-rc3 2025-03-17T17:41:36.4014737Z * [new tag] v1.12.0-rc4 -> v1.12.0-rc4 2025-03-17T17:41:36.4015423Z * [new tag] v1.12.0-rc5 -> v1.12.0-rc5 2025-03-17T17:41:36.4016414Z * [new tag] v1.12.0-rc6 -> v1.12.0-rc6 2025-03-17T17:41:36.4016915Z * [new tag] v1.12.0-rc7 -> v1.12.0-rc7 2025-03-17T17:41:36.4017537Z * [new tag] v1.12.0-rc8 -> v1.12.0-rc8 2025-03-17T17:41:36.4018128Z * [new tag] v1.12.1 -> v1.12.1 2025-03-17T17:41:36.4019273Z * [new tag] v1.12.1-rc1 -> v1.12.1-rc1 2025-03-17T17:41:36.4019900Z * [new tag] v1.12.1-rc2 -> v1.12.1-rc2 2025-03-17T17:41:36.4020854Z * [new tag] v1.12.1-rc3 -> v1.12.1-rc3 2025-03-17T17:41:36.4021702Z * [new tag] v1.12.1-rc4 -> 
v1.12.1-rc4 2025-03-17T17:41:36.4022296Z * [new tag] v1.12.1-rc5 -> v1.12.1-rc5 2025-03-17T17:41:36.4023167Z * [new tag] v1.13.0 -> v1.13.0 2025-03-17T17:41:36.4023846Z * [new tag] v1.13.0-rc1 -> v1.13.0-rc1 2025-03-17T17:41:36.4024745Z * [new tag] v1.13.0-rc2 -> v1.13.0-rc2 2025-03-17T17:41:36.4025626Z * [new tag] v1.13.0-rc3 -> v1.13.0-rc3 2025-03-17T17:41:36.4026692Z * [new tag] v1.13.0-rc4 -> v1.13.0-rc4 2025-03-17T17:41:36.4027156Z * [new tag] v1.13.0-rc5 -> v1.13.0-rc5 2025-03-17T17:41:36.4027792Z * [new tag] v1.13.0-rc6 -> v1.13.0-rc6 2025-03-17T17:41:36.4028734Z * [new tag] v1.13.1 -> v1.13.1 2025-03-17T17:41:36.4029253Z * [new tag] v1.13.1-rc1 -> v1.13.1-rc1 2025-03-17T17:41:36.4030130Z * [new tag] v1.2.0 -> v1.2.0 2025-03-17T17:41:36.4031309Z * [new tag] v1.2.0a0 -> v1.2.0a0 2025-03-17T17:41:36.4032191Z * [new tag] v1.3.0 -> v1.3.0 2025-03-17T17:41:36.4032816Z * [new tag] v1.3.0a0 -> v1.3.0a0 2025-03-17T17:41:36.4033454Z * [new tag] v1.3.1 -> v1.3.1 2025-03-17T17:41:36.4034264Z * [new tag] v1.4.0 -> v1.4.0 2025-03-17T17:41:36.4035207Z * [new tag] v1.4.0a0 -> v1.4.0a0 2025-03-17T17:41:36.4035808Z * [new tag] v1.4.1 -> v1.4.1 2025-03-17T17:41:36.4036670Z * [new tag] v1.5.0 -> v1.5.0 2025-03-17T17:41:36.4041144Z * [new tag] v1.5.0-rc1 -> v1.5.0-rc1 2025-03-17T17:41:36.4042092Z * [new tag] v1.5.0-rc2 -> v1.5.0-rc2 2025-03-17T17:41:36.4043049Z * [new tag] v1.5.0-rc3 -> v1.5.0-rc3 2025-03-17T17:41:36.4043712Z * [new tag] v1.5.0-rc4 -> v1.5.0-rc4 2025-03-17T17:41:36.4044313Z * [new tag] v1.5.0-rc5 -> v1.5.0-rc5 2025-03-17T17:41:36.4045255Z * [new tag] v1.5.1 -> v1.5.1 2025-03-17T17:41:36.4045744Z * [new tag] v1.5.1-rc1 -> v1.5.1-rc1 2025-03-17T17:41:36.4046397Z * [new tag] v1.6.0 -> v1.6.0 2025-03-17T17:41:36.4047306Z * [new tag] v1.6.0-rc1 -> v1.6.0-rc1 2025-03-17T17:41:36.4048190Z * [new tag] v1.6.0-rc2 -> v1.6.0-rc2 2025-03-17T17:41:36.4049035Z * [new tag] v1.6.0-rc3 -> v1.6.0-rc3 2025-03-17T17:41:36.4049762Z * [new tag] v1.6.0-rc4 -> v1.6.0-rc4 2025-03-17T17:41:36.4050627Z * [new tag] v1.6.0-rc5 -> v1.6.0-rc5 2025-03-17T17:41:36.4051432Z * [new tag] v1.6.0-rc6 -> v1.6.0-rc6 2025-03-17T17:41:36.4051946Z * [new tag] v1.6.0-rc7 -> v1.6.0-rc7 2025-03-17T17:41:36.4052863Z * [new tag] v1.7.0 -> v1.7.0 2025-03-17T17:41:36.4053711Z * [new tag] v1.7.0-rc1 -> v1.7.0-rc1 2025-03-17T17:41:36.4054615Z * [new tag] v1.7.0-rc2 -> v1.7.0-rc2 2025-03-17T17:41:36.4055448Z * [new tag] v1.7.0-rc3 -> v1.7.0-rc3 2025-03-17T17:41:36.4055920Z * [new tag] v1.7.0-rc4 -> v1.7.0-rc4 2025-03-17T17:41:36.4056803Z * [new tag] v1.7.1 -> v1.7.1 2025-03-17T17:41:36.4057714Z * [new tag] v1.7.1-rc1 -> v1.7.1-rc1 2025-03-17T17:41:36.4058568Z * [new tag] v1.7.1-rc2 -> v1.7.1-rc2 2025-03-17T17:41:36.4059069Z * [new tag] v1.7.1-rc3 -> v1.7.1-rc3 2025-03-17T17:41:36.4059982Z * [new tag] v1.8.0 -> v1.8.0 2025-03-17T17:41:36.4060619Z * [new tag] v1.8.0-rc1 -> v1.8.0-rc1 2025-03-17T17:41:36.4061476Z * [new tag] v1.8.0-rc2 -> v1.8.0-rc2 2025-03-17T17:41:36.4062169Z * [new tag] v1.8.0-rc3 -> v1.8.0-rc3 2025-03-17T17:41:36.4063071Z * [new tag] v1.8.0-rc4 -> v1.8.0-rc4 2025-03-17T17:41:36.4063704Z * [new tag] v1.8.0-rc5 -> v1.8.0-rc5 2025-03-17T17:41:36.4064321Z * [new tag] v1.8.1 -> v1.8.1 2025-03-17T17:41:36.4065200Z * [new tag] v1.8.1-rc1 -> v1.8.1-rc1 2025-03-17T17:41:36.4065687Z * [new tag] v1.8.1-rc2 -> v1.8.1-rc2 2025-03-17T17:41:36.4066402Z * [new tag] v1.8.1-rc3 -> v1.8.1-rc3 2025-03-17T17:41:36.4067810Z * [new tag] v1.8.2 -> v1.8.2 2025-03-17T17:41:36.4068403Z * [new tag] v1.8.2-rc1 -> v1.8.2-rc1 
2025-03-17T17:41:36.4069280Z * [new tag] v1.9.0 -> v1.9.0 2025-03-17T17:41:36.4070120Z * [new tag] v1.9.0-rc1 -> v1.9.0-rc1 2025-03-17T17:41:36.4070971Z * [new tag] v1.9.0-rc2 -> v1.9.0-rc2 2025-03-17T17:41:36.4071789Z * [new tag] v1.9.0-rc3 -> v1.9.0-rc3 2025-03-17T17:41:36.4072425Z * [new tag] v1.9.0-rc4 -> v1.9.0-rc4 2025-03-17T17:41:36.4073343Z * [new tag] v1.9.1 -> v1.9.1 2025-03-17T17:41:36.4074333Z * [new tag] v1.9.1-rc1 -> v1.9.1-rc1 2025-03-17T17:41:36.4074917Z * [new tag] v1.9.1-rc2 -> v1.9.1-rc2 2025-03-17T17:41:36.4075785Z * [new tag] v2.0.0 -> v2.0.0 2025-03-17T17:41:36.4076607Z * [new tag] v2.0.0-rc1 -> v2.0.0-rc1 2025-03-17T17:41:36.4077443Z * [new tag] v2.0.0-rc2 -> v2.0.0-rc2 2025-03-17T17:41:36.4078320Z * [new tag] v2.0.0-rc3 -> v2.0.0-rc3 2025-03-17T17:41:36.4079010Z * [new tag] v2.0.0-rc4 -> v2.0.0-rc4 2025-03-17T17:41:36.4079893Z * [new tag] v2.0.0-rc5 -> v2.0.0-rc5 2025-03-17T17:41:36.4080451Z * [new tag] v2.0.0-rc6 -> v2.0.0-rc6 2025-03-17T17:41:36.4081348Z * [new tag] v2.0.1 -> v2.0.1 2025-03-17T17:41:36.4082207Z * [new tag] v2.0.1-rc1 -> v2.0.1-rc1 2025-03-17T17:41:36.4082784Z * [new tag] v2.0.1-rc2 -> v2.0.1-rc2 2025-03-17T17:41:36.4083498Z * [new tag] v2.0.1-rc3 -> v2.0.1-rc3 2025-03-17T17:41:36.4084095Z * [new tag] v2.0.1-rc4 -> v2.0.1-rc4 2025-03-17T17:41:36.4085804Z * [new tag] v2.1.0 -> v2.1.0 2025-03-17T17:41:36.4086650Z * [new tag] v2.1.0-rc1 -> v2.1.0-rc1 2025-03-17T17:41:36.4087475Z * [new tag] v2.1.0-rc2 -> v2.1.0-rc2 2025-03-17T17:41:36.4088322Z * [new tag] v2.1.0-rc3 -> v2.1.0-rc3 2025-03-17T17:41:36.4089229Z * [new tag] v2.1.0-rc4 -> v2.1.0-rc4 2025-03-17T17:41:36.4090075Z * [new tag] v2.1.0-rc5 -> v2.1.0-rc5 2025-03-17T17:41:36.4090635Z * [new tag] v2.1.0-rc6 -> v2.1.0-rc6 2025-03-17T17:41:36.4091444Z * [new tag] v2.1.1 -> v2.1.1 2025-03-17T17:41:36.4092238Z * [new tag] v2.1.1-rc1 -> v2.1.1-rc1 2025-03-17T17:41:36.4093053Z * [new tag] v2.1.1-rc2 -> v2.1.1-rc2 2025-03-17T17:41:36.4093905Z * [new tag] v2.1.1-rc3 -> v2.1.1-rc3 2025-03-17T17:41:36.4094724Z * [new tag] v2.1.1-rc4 -> v2.1.1-rc4 2025-03-17T17:41:36.4095420Z * [new tag] v2.1.1-rc5 -> v2.1.1-rc5 2025-03-17T17:41:36.4096039Z * [new tag] v2.1.1-rc6 -> v2.1.1-rc6 2025-03-17T17:41:36.4096852Z * [new tag] v2.1.2 -> v2.1.2 2025-03-17T17:41:36.4097690Z * [new tag] v2.1.2-rc1 -> v2.1.2-rc1 2025-03-17T17:41:36.4098387Z * [new tag] v2.1.2-rc2 -> v2.1.2-rc2 2025-03-17T17:41:36.4099034Z * [new tag] v2.1.2-rc3 -> v2.1.2-rc3 2025-03-17T17:41:36.4099962Z * [new tag] v2.2.0 -> v2.2.0 2025-03-17T17:41:36.4100656Z * [new tag] v2.2.0-rc1 -> v2.2.0-rc1 2025-03-17T17:41:36.4101500Z * [new tag] v2.2.0-rc2 -> v2.2.0-rc2 2025-03-17T17:41:36.4102176Z * [new tag] v2.2.0-rc3 -> v2.2.0-rc3 2025-03-17T17:41:36.4103031Z * [new tag] v2.2.0-rc4 -> v2.2.0-rc4 2025-03-17T17:41:36.4103851Z * [new tag] v2.2.0-rc5 -> v2.2.0-rc5 2025-03-17T17:41:36.4104566Z * [new tag] v2.2.0-rc6 -> v2.2.0-rc6 2025-03-17T17:41:36.4105148Z * [new tag] v2.2.0-rc7 -> v2.2.0-rc7 2025-03-17T17:41:36.4105807Z * [new tag] v2.2.0-rc8 -> v2.2.0-rc8 2025-03-17T17:41:36.4106759Z * [new tag] v2.2.1 -> v2.2.1 2025-03-17T17:41:36.4107723Z * [new tag] v2.2.1-rc1 -> v2.2.1-rc1 2025-03-17T17:41:36.4108220Z * [new tag] v2.2.1-rc2 -> v2.2.1-rc2 2025-03-17T17:41:36.4108867Z * [new tag] v2.2.1-rc3 -> v2.2.1-rc3 2025-03-17T17:41:36.4109479Z * [new tag] v2.2.2 -> v2.2.2 2025-03-17T17:41:36.4110426Z * [new tag] v2.2.2-rc1 -> v2.2.2-rc1 2025-03-17T17:41:36.4111045Z * [new tag] v2.2.2-rc2 -> v2.2.2-rc2 2025-03-17T17:41:36.4111608Z * [new tag] v2.2.2-rc3 -> v2.2.2-rc3 
2025-03-17T17:41:36.4112499Z * [new tag] v2.3.0 -> v2.3.0 2025-03-17T17:41:36.4113344Z * [new tag] v2.3.0-rc1 -> v2.3.0-rc1 2025-03-17T17:41:36.4114282Z * [new tag] v2.3.0-rc10 -> v2.3.0-rc10 2025-03-17T17:41:36.4115200Z * [new tag] v2.3.0-rc11 -> v2.3.0-rc11 2025-03-17T17:41:36.4115686Z * [new tag] v2.3.0-rc12 -> v2.3.0-rc12 2025-03-17T17:41:36.4116599Z * [new tag] v2.3.0-rc2 -> v2.3.0-rc2 2025-03-17T17:41:36.4117450Z * [new tag] v2.3.0-rc3 -> v2.3.0-rc3 2025-03-17T17:41:36.4118164Z * [new tag] v2.3.0-rc4 -> v2.3.0-rc4 2025-03-17T17:41:36.4119061Z * [new tag] v2.3.0-rc5 -> v2.3.0-rc5 2025-03-17T17:41:36.4119587Z * [new tag] v2.3.0-rc6 -> v2.3.0-rc6 2025-03-17T17:41:36.4120553Z * [new tag] v2.3.0-rc7 -> v2.3.0-rc7 2025-03-17T17:41:36.4121230Z * [new tag] v2.3.0-rc8 -> v2.3.0-rc8 2025-03-17T17:41:36.4121840Z * [new tag] v2.3.0-rc9 -> v2.3.0-rc9 2025-03-17T17:41:36.4122442Z * [new tag] v2.3.1 -> v2.3.1 2025-03-17T17:41:36.4123382Z * [new tag] v2.3.1-rc1 -> v2.3.1-rc1 2025-03-17T17:41:36.4124199Z * [new tag] v2.3.1-rc2 -> v2.3.1-rc2 2025-03-17T17:41:36.4125023Z * [new tag] v2.3.1-rc3 -> v2.3.1-rc3 2025-03-17T17:41:36.4125883Z * [new tag] v2.4.0 -> v2.4.0 2025-03-17T17:41:36.4126720Z * [new tag] v2.4.0-rc1 -> v2.4.0-rc1 2025-03-17T17:41:36.4127431Z * [new tag] v2.4.0-rc2 -> v2.4.0-rc2 2025-03-17T17:41:36.4128356Z * [new tag] v2.4.0-rc3 -> v2.4.0-rc3 2025-03-17T17:41:36.4129052Z * [new tag] v2.4.0-rc4 -> v2.4.0-rc4 2025-03-17T17:41:36.4130042Z * [new tag] v2.4.0-rc5 -> v2.4.0-rc5 2025-03-17T17:41:36.4130844Z * [new tag] v2.4.0-rc6 -> v2.4.0-rc6 2025-03-17T17:41:36.4131689Z * [new tag] v2.4.0-rc7 -> v2.4.0-rc7 2025-03-17T17:41:36.4132528Z * [new tag] v2.4.0-rc8 -> v2.4.0-rc8 2025-03-17T17:41:36.4133380Z * [new tag] v2.4.0-rc9 -> v2.4.0-rc9 2025-03-17T17:41:36.4133889Z * [new tag] v2.4.1 -> v2.4.1 2025-03-17T17:41:36.4134911Z * [new tag] v2.4.1-rc1 -> v2.4.1-rc1 2025-03-17T17:41:36.4136203Z * [new tag] v2.4.1-rc2 -> v2.4.1-rc2 2025-03-17T17:41:36.4137207Z * [new tag] v2.4.1-rc3 -> v2.4.1-rc3 2025-03-17T17:41:36.4138152Z * [new tag] v2.5.0 -> v2.5.0 2025-03-17T17:41:36.4138871Z * [new tag] v2.5.0-rc1 -> v2.5.0-rc1 2025-03-17T17:41:36.4139510Z * [new tag] v2.5.0-rc10 -> v2.5.0-rc10 2025-03-17T17:41:36.4140549Z * [new tag] v2.5.0-rc2 -> v2.5.0-rc2 2025-03-17T17:41:36.4141440Z * [new tag] v2.5.0-rc3 -> v2.5.0-rc3 2025-03-17T17:41:36.4142283Z * [new tag] v2.5.0-rc4 -> v2.5.0-rc4 2025-03-17T17:41:36.4143135Z * [new tag] v2.5.0-rc5 -> v2.5.0-rc5 2025-03-17T17:41:36.4144131Z * [new tag] v2.5.0-rc6 -> v2.5.0-rc6 2025-03-17T17:41:36.4144955Z * [new tag] v2.5.0-rc7 -> v2.5.0-rc7 2025-03-17T17:41:36.4145830Z * [new tag] v2.5.0-rc8 -> v2.5.0-rc8 2025-03-17T17:41:36.4146720Z * [new tag] v2.5.0-rc9 -> v2.5.0-rc9 2025-03-17T17:41:36.4147305Z * [new tag] v2.5.1 -> v2.5.1 2025-03-17T17:41:36.4147917Z * [new tag] v2.5.1-rc1 -> v2.5.1-rc1 2025-03-17T17:41:36.4148540Z * [new tag] v2.6.0 -> v2.6.0 2025-03-17T17:41:36.4149517Z * [new tag] v2.6.0-rc1 -> v2.6.0-rc1 2025-03-17T17:41:36.4150459Z * [new tag] v2.6.0-rc2 -> v2.6.0-rc2 2025-03-17T17:41:36.4151324Z * [new tag] v2.6.0-rc3 -> v2.6.0-rc3 2025-03-17T17:41:36.4152046Z * [new tag] v2.6.0-rc4 -> v2.6.0-rc4 2025-03-17T17:41:36.4153158Z * [new tag] v2.6.0-rc5 -> v2.6.0-rc5 2025-03-17T17:41:36.4154117Z * [new tag] v2.6.0-rc6 -> v2.6.0-rc6 2025-03-17T17:41:36.4155075Z * [new tag] v2.6.0-rc7 -> v2.6.0-rc7 2025-03-17T17:41:36.4155904Z * [new tag] v2.6.0-rc8 -> v2.6.0-rc8 2025-03-17T17:41:36.4156743Z * [new tag] v2.6.0-rc9 -> v2.6.0-rc9 2025-03-17T17:41:36.4157555Z * [new 
tag] v2.7.0-rc1 -> v2.7.0-rc1 2025-03-17T17:41:36.4158243Z * [new tag] v2.7.0-rc2 -> v2.7.0-rc2 2025-03-17T17:41:36.4159123Z * [new tag] whc_flight_1 -> whc_flight_1 2025-03-17T17:41:36.4159742Z * [new tag] whc_flight_2 -> whc_flight_2 2025-03-17T17:41:36.4160381Z * [new tag] whc_flight_4 -> whc_flight_4 2025-03-17T17:41:36.4716913Z [command]/usr/bin/git rev-parse --verify --quiet 52b86900e894e6b34d880548ab6883b3d9207fb6^{object} 2025-03-17T17:41:36.4738294Z 52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:41:36.4741551Z ##[endgroup] 2025-03-17T17:41:36.4741874Z ##[group]Determining the checkout info 2025-03-17T17:41:36.4742972Z ##[endgroup] 2025-03-17T17:41:36.4746671Z [command]/usr/bin/git sparse-checkout disable 2025-03-17T17:41:36.4775647Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-03-17T17:41:36.4797464Z ##[group]Checking out the ref 2025-03-17T17:41:36.4800794Z [command]/usr/bin/git checkout --progress --force 52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:41:37.5057799Z Updating files: 99% (16457/16573) 2025-03-17T17:41:37.5058197Z Updating files: 100% (16573/16573) 2025-03-17T17:41:37.5058536Z Updating files: 100% (16573/16573), done. 2025-03-17T17:41:37.5281171Z Note: switching to '52b86900e894e6b34d880548ab6883b3d9207fb6'. 2025-03-17T17:41:37.5281502Z 2025-03-17T17:41:37.5281733Z You are in 'detached HEAD' state. You can look around, make experimental 2025-03-17T17:41:37.5282330Z changes and commit them, and you can discard any commits you make in this 2025-03-17T17:41:37.5283110Z state without impacting any branches by switching back to a branch. 2025-03-17T17:41:37.5283467Z 2025-03-17T17:41:37.5283689Z If you want to create a new branch to retain commits you create, you may 2025-03-17T17:41:37.5284238Z do so (now or later) by using -c with the switch command. 
Example: 2025-03-17T17:41:37.5284558Z 2025-03-17T17:41:37.5284682Z git switch -c 2025-03-17T17:41:37.5284907Z 2025-03-17T17:41:37.5285025Z Or undo this operation with: 2025-03-17T17:41:37.5285230Z 2025-03-17T17:41:37.5285330Z git switch - 2025-03-17T17:41:37.5285485Z 2025-03-17T17:41:37.5285737Z Turn off this advice by setting config variable advice.detachedHead to false 2025-03-17T17:41:37.5286119Z 2025-03-17T17:41:37.5286238Z HEAD is now at 52b86900e89 Update 2025-03-17T17:41:37.5338194Z ##[endgroup] 2025-03-17T17:41:37.5338661Z ##[group]Setting up auth for fetching submodules 2025-03-17T17:41:37.5344554Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-03-17T17:41:37.5387505Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-03-17T17:41:37.5411458Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-03-17T17:41:37.5434191Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-03-17T17:41:37.5455574Z ##[endgroup] 2025-03-17T17:41:37.5456008Z ##[group]Fetching submodules 2025-03-17T17:41:37.5459065Z [command]/usr/bin/git submodule sync --recursive 2025-03-17T17:41:37.5728911Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-03-17T17:41:37.5990927Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni' 2025-03-17T17:41:37.5993280Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16' 2025-03-17T17:41:37.5996733Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2025-03-17T17:41:37.5999199Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2025-03-17T17:41:37.6002170Z Submodule 'third_party/NVTX' (https://github.com/NVIDIA/NVTX.git) registered for path 'third_party/NVTX' 2025-03-17T17:41:37.6005715Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator' 2025-03-17T17:41:37.6009053Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2025-03-17T17:41:37.6012132Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2025-03-17T17:41:37.6015604Z Submodule 'third_party/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/composable_kernel' 2025-03-17T17:41:37.6019121Z Submodule 'third_party/cpp-httplib' (https://github.com/yhirose/cpp-httplib.git) registered for path 'third_party/cpp-httplib' 2025-03-17T17:41:37.6022600Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2025-03-17T17:41:37.6026392Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2025-03-17T17:41:37.6030251Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass' 2025-03-17T17:41:37.6318403Z Submodule 'third_party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'third_party/eigen' 2025-03-17T17:41:37.6321883Z 
Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2025-03-17T17:41:37.6326313Z Submodule 'third_party/flash-attention' (https://github.com/Dao-AILab/flash-attention.git) registered for path 'third_party/flash-attention' 2025-03-17T17:41:37.6330480Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2025-03-17T17:41:37.6334524Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2025-03-17T17:41:37.6339275Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:41:37.6343624Z Submodule 'third_party/gloo' (https://github.com/facebookincubator/gloo) registered for path 'third_party/gloo' 2025-03-17T17:41:37.6348281Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2025-03-17T17:41:37.6352780Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2025-03-17T17:41:37.6357460Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi' 2025-03-17T17:41:37.6362280Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2025-03-17T17:41:37.6367146Z Submodule 'third_party/kleidiai' (https://github.com/ARM-software/kleidiai.git) registered for path 'third_party/kleidiai' 2025-03-17T17:41:37.6372074Z Submodule 'third_party/mimalloc' (https://github.com/microsoft/mimalloc.git) registered for path 'third_party/mimalloc' 2025-03-17T17:41:37.6377053Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann' 2025-03-17T17:41:37.6382141Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2025-03-17T17:41:37.6387725Z Submodule 'third_party/opentelemetry-cpp' (https://github.com/open-telemetry/opentelemetry-cpp.git) registered for path 'third_party/opentelemetry-cpp' 2025-03-17T17:41:37.6394447Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2025-03-17T17:41:37.6411373Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf' 2025-03-17T17:41:37.6416889Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2025-03-17T17:41:37.6422499Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2025-03-17T17:41:37.6428111Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2025-03-17T17:41:37.6433886Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy' 2025-03-17T17:41:37.6441021Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 2025-03-17T17:41:37.6447003Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2025-03-17T17:41:37.6475387Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 
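The "Submodule ... registered" and "Cloning into ..." lines in this block are the output of the recursive submodule fetch invoked just above. A minimal local equivalent of that step, assuming a fresh clone of pytorch/pytorch as the working directory, would be the same two commands the log shows:

    git submodule sync --recursive
    git -c protocol.version=2 submodule update --init --force --recursive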
2025-03-17T17:41:37.9213214Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2025-03-17T17:41:37.9214062Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2025-03-17T17:41:37.9237728Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpp-httplib'... 2025-03-17T17:41:38.8040336Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2025-03-17T17:41:38.8042101Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NVTX'... 2025-03-17T17:41:38.8043770Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2025-03-17T17:41:38.8202055Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/composable_kernel'... 2025-03-17T17:41:43.3450937Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2025-03-17T17:41:43.3453019Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention'... 2025-03-17T17:41:43.3454881Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 2025-03-17T17:41:43.3456858Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'... 2025-03-17T17:41:43.3458658Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'... 2025-03-17T17:41:43.3654842Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2025-03-17T17:41:44.6527223Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2025-03-17T17:41:44.6528767Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 2025-03-17T17:41:44.6530193Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2025-03-17T17:41:44.6531586Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'... 2025-03-17T17:41:44.6533045Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 2025-03-17T17:41:44.6534491Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2025-03-17T17:41:44.6647930Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kleidiai'... 2025-03-17T17:41:44.9328517Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2025-03-17T17:41:46.7343363Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 2025-03-17T17:41:46.7344902Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/mimalloc'... 2025-03-17T17:41:46.7346718Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2025-03-17T17:41:46.7348185Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2025-03-17T17:41:46.7376677Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 2025-03-17T17:42:11.8920097Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2025-03-17T17:42:11.8920977Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2025-03-17T17:42:11.8921765Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 
2025-03-17T17:42:11.8922527Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2025-03-17T17:42:11.8923316Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2025-03-17T17:42:11.8924086Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/eigen'... 2025-03-17T17:42:11.8924860Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2025-03-17T17:42:11.8925605Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'... 2025-03-17T17:42:11.8926416Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp'... 2025-03-17T17:42:11.8927230Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2025-03-17T17:42:11.9072142Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-03-17T17:42:11.9184676Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-03-17T17:42:11.9270277Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-03-17T17:42:11.9503935Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-03-17T17:42:11.9826765Z Submodule path 'third_party/NVTX': checked out 'e170594ac7cf1dac584da473d4ca9301087090c1' 2025-03-17T17:42:12.0188266Z Submodule path 'third_party/VulkanMemoryAllocator': checked out 'a6bfc237255a6bac1513f7c1ebde6d8aed6b5191' 2025-03-17T17:42:12.7046049Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-03-17T17:42:12.7273170Z Submodule path 'third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415' 2025-03-17T17:42:12.9629946Z Submodule path 'third_party/composable_kernel': checked out '8086bbe3a78d931eb96fe12fdc014082e18d18d3' 2025-03-17T17:42:13.0098588Z Submodule path 'third_party/cpp-httplib': checked out '3b6597bba913d51161383657829b7e644e59c006' 2025-03-17T17:42:13.1038025Z Submodule path 'third_party/cpuinfo': checked out '1e83a2fdd3102f65c6f1fb602c1b320486218a99' 2025-03-17T17:42:13.1376706Z Submodule path 'third_party/cudnn_frontend': checked out '91b7532f3386768bba4f444ee7672b497f34da8a' 2025-03-17T17:42:13.6929576Z Submodule path 'third_party/cutlass': checked out 'afa1772203677c5118fcd82537a9c8fefbcc7008' 2025-03-17T17:42:13.9383668Z Submodule path 'third_party/eigen': checked out '3147391d946bb4b6c68edd901f2add6ac1f31f8c' 2025-03-17T17:42:14.0361406Z Submodule path 'third_party/fbgemm': checked out 'dbc3157bf256f1339b3fa1fef2be89ac4078be0e' 2025-03-17T17:42:14.0377950Z Submodule 'third_party/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:42:14.0380111Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:42:14.0382431Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:42:14.0384875Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:42:14.0387900Z Submodule 'third_party/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/third_party/hipify_torch' 
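Because the update runs with --recursive, superprojects such as fbgemm that carry submodules of their own (asmjit, cpuinfo, cutlass, googletest, hipify_torch) get those nested repositories registered and cloned as well, which is what the following lines show. To preview which nested submodules a given checkout will pull, the declared paths can be read straight out of that project's .gitmodules with a generic git command (not part of this workflow's scripts):

    git config -f third_party/fbgemm/.gitmodules --get-regexp path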
2025-03-17T17:42:14.0413626Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/asmjit'... 2025-03-17T17:42:15.3568861Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/hipify_torch'... 2025-03-17T17:42:15.3570769Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cpuinfo'... 2025-03-17T17:42:15.4570137Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cutlass'... 2025-03-17T17:42:16.6264867Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/googletest'... 2025-03-17T17:42:16.6733140Z Submodule path 'third_party/fbgemm/third_party/asmjit': checked out 'd3fbf7c9bc7c1d1365a94a45614b91c5a3706b81' 2025-03-17T17:42:16.7653556Z Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3' 2025-03-17T17:42:17.1503033Z Submodule path 'third_party/fbgemm/third_party/cutlass': checked out 'fc9ebc645b63f3a6bc80aaefde5c063fb72110d6' 2025-03-17T17:42:17.2109332Z Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796' 2025-03-17T17:42:17.2221190Z Submodule path 'third_party/fbgemm/third_party/hipify_torch': checked out '23f53b025b466d8ec3c45d52290d3442f7fbe6b1' 2025-03-17T17:42:17.2869111Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-03-17T17:42:17.2885491Z Submodule 'csrc/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:42:17.2887565Z Submodule 'csrc/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:42:17.2912261Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/composable_kernel'... 2025-03-17T17:42:19.9156424Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/cutlass'... 2025-03-17T17:42:20.1732839Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33' 2025-03-17T17:42:20.7142519Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420' 2025-03-17T17:42:20.8428753Z Submodule path 'third_party/flatbuffers': checked out '01834de25e4bf3975a9a00e816292b1ad0fe184b' 2025-03-17T17:42:20.8748520Z Submodule path 'third_party/fmt': checked out '123913715afeb8a437e6388b4473fcc4753e1c9a' 2025-03-17T17:42:20.9127996Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2025-03-17T17:42:20.9377932Z Submodule path 'third_party/gloo': checked out '5354032ea08eadd7fc4456477f7f7c6308818509' 2025-03-17T17:42:20.9789977Z Submodule path 'third_party/googletest': checked out 'b514bdc898e2951020cbdca1304b75f5950d1f59' 2025-03-17T17:42:20.9908705Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3' 2025-03-17T17:42:20.9923070Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2025-03-17T17:42:20.9946978Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 
2025-03-17T17:42:36.7757175Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d' 2025-03-17T17:42:36.7932918Z Submodule path 'third_party/ittapi': checked out '5b8a7d7422611c3a0d799fb5fc5dd4abfae35b42' 2025-03-17T17:42:36.8762068Z Submodule path 'third_party/kineto': checked out 'a054a4be0db117c579a21747debf19c863631f26' 2025-03-17T17:42:36.8777496Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:42:36.8779247Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:42:36.8781789Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:42:36.8807052Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'... 2025-03-17T17:42:37.6459019Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2025-03-17T17:42:38.7413018Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 2025-03-17T17:42:38.8208382Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e' 2025-03-17T17:42:38.8223742Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:42:38.8226195Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:42:38.8229112Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:42:38.8232169Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:42:38.8235068Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:42:38.8238919Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:42:38.8241881Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:42:38.8245128Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:42:38.8272279Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'... 2025-03-17T17:42:40.5136348Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'... 2025-03-17T17:42:40.5144007Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'... 
2025-03-17T17:42:40.5146845Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'... 2025-03-17T17:42:40.5150889Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'... 2025-03-17T17:42:40.6138094Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'... 2025-03-17T17:42:41.0996320Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'... 2025-03-17T17:42:41.1995712Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'... 2025-03-17T17:42:47.7381359Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9' 2025-03-17T17:42:47.7562095Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400' 2025-03-17T17:42:47.7912561Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2025-03-17T17:42:47.8039419Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067' 2025-03-17T17:42:47.8054036Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:42:47.8077030Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'... 
2025-03-17T17:42:48.1130286Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2025-03-17T17:42:48.1304954Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2025-03-17T17:42:48.1696428Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850' 2025-03-17T17:42:48.2631125Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2025-03-17T17:42:48.2788279Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2025-03-17T17:42:48.3110793Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '0041a40c1350ba702d475b9c4ad62da77caea164' 2025-03-17T17:42:48.3663930Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2025-03-17T17:42:48.4000991Z Submodule path 'third_party/kleidiai': checked out 'ef685a13cfbe8d418aa2ed34350e21e4938358b6' 2025-03-17T17:42:48.4352267Z Submodule path 'third_party/mimalloc': checked out 'b66e3214d8a104669c2ec05ae91ebc26a8f5ab78' 2025-03-17T17:42:48.5384262Z Submodule path 'third_party/nlohmann': checked out '87cda1d6646592ac5866dc703c8e1839046a6806' 2025-03-17T17:42:48.8829883Z Submodule path 'third_party/onnx': checked out 'b8baa8446686496da4cc8fda09f2b6fe65c2a02c' 2025-03-17T17:42:48.8863442Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2025-03-17T17:42:48.8888953Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 
2025-03-17T17:42:49.8164605Z Submodule path 'third_party/onnx/third_party/pybind11': checked out '3e9dfa2866941655c56877882565e7577de6fc7b' 2025-03-17T17:42:49.8819399Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878' 2025-03-17T17:42:49.8836713Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark) registered for path 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:42:49.8839263Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:42:49.8841797Z Submodule 'third_party/ms-gsl' (https://github.com/microsoft/GSL) registered for path 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:42:49.8844537Z Submodule 'third_party/nlohmann-json' (https://github.com/nlohmann/json) registered for path 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:42:49.8847503Z Submodule 'third_party/opentelemetry-proto' (https://github.com/open-telemetry/opentelemetry-proto) registered for path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T17:42:49.8850299Z Submodule 'third_party/opentracing-cpp' (https://github.com/opentracing/opentracing-cpp.git) registered for path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:42:49.8853413Z Submodule 'third_party/prometheus-cpp' (https://github.com/jupp0r/prometheus-cpp) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:42:49.8856115Z Submodule 'tools/vcpkg' (https://github.com/Microsoft/vcpkg) registered for path 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:42:49.8881537Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/benchmark'... 2025-03-17T17:42:50.4720029Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentracing-cpp'... 2025-03-17T17:42:50.4722183Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/ms-gsl'... 2025-03-17T17:42:50.4724345Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentelemetry-proto'... 2025-03-17T17:42:50.4726611Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp'... 2025-03-17T17:42:50.5721107Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/googletest'... 2025-03-17T17:42:51.5607903Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/nlohmann-json'... 2025-03-17T17:43:00.1562528Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/tools/vcpkg'... 
2025-03-17T17:43:00.1750724Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2' 2025-03-17T17:43:00.2135378Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1' 2025-03-17T17:43:00.2289944Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa' 2025-03-17T17:43:00.3264725Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d' 2025-03-17T17:43:00.3389155Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce' 2025-03-17T17:43:00.3525515Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5' 2025-03-17T17:43:00.3671793Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d' 2025-03-17T17:43:00.3685491Z Submodule 'civetweb' (https://github.com/civetweb/civetweb.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:00.3687767Z Submodule 'googletest' (https://github.com/google/googletest.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:00.3711179Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'... 2025-03-17T17:43:02.6705623Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'... 2025-03-17T17:43:02.9116004Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4' 2025-03-17T17:43:02.9563676Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-03-17T17:43:03.3908451Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50' 2025-03-17T17:43:03.4020378Z Submodule path 'third_party/pocketfft': checked out '9d3ab05a7fffbc71a492bc6a17be034e83e8f0fe' 2025-03-17T17:43:03.6586803Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2025-03-17T17:43:03.6607527Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:03.6609667Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:03.6635330Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2025-03-17T17:43:04.2413999Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 
2025-03-17T17:43:04.8872478Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2025-03-17T17:43:04.9562313Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2025-03-17T17:43:04.9647813Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2025-03-17T17:43:04.9759231Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8' 2025-03-17T17:43:05.0102091Z Submodule path 'third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-03-17T17:43:05.0373132Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2025-03-17T17:43:05.0778731Z Submodule path 'third_party/sleef': checked out '56e1f79cb140fb9326d612d0be06b5250565cade' 2025-03-17T17:43:05.1028278Z Submodule path 'third_party/tensorpipe': checked out '52791a2fd214b2a9dc5759d36725909c1daa7f2e' 2025-03-17T17:43:05.1044159Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2025-03-17T17:43:05.1046194Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:05.1048604Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:05.1051059Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:05.1075475Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2025-03-17T17:43:06.8327077Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2025-03-17T17:43:06.8328110Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2025-03-17T17:43:06.8329098Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2025-03-17T17:43:06.8873923Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2025-03-17T17:43:06.9025269Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2025-03-17T17:43:06.9727927Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242' 2025-03-17T17:43:07.0002938Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2025-03-17T17:43:07.0018027Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:07.0041422Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 
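Each "Submodule path ... checked out '<sha>'" line above pins the submodule to the exact commit recorded as a gitlink in the superproject tree, not to a branch tip. Assuming the same checkout, the recorded commit for any path can be inspected with ordinary git commands such as:

    git ls-tree HEAD third_party/pybind11
    git submodule status third_party/pybind11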
2025-03-17T17:43:07.2185875Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2025-03-17T17:43:07.2223825Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-03-17T17:43:07.2492840Z Entering 'android/libs/fbjni' 2025-03-17T17:43:07.2529877Z Entering 'third_party/FP16' 2025-03-17T17:43:07.2567665Z Entering 'third_party/FXdiv' 2025-03-17T17:43:07.2604122Z Entering 'third_party/NNPACK' 2025-03-17T17:43:07.2641338Z Entering 'third_party/NVTX' 2025-03-17T17:43:07.2679957Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T17:43:07.2717555Z Entering 'third_party/XNNPACK' 2025-03-17T17:43:07.2770217Z Entering 'third_party/benchmark' 2025-03-17T17:43:07.2808701Z Entering 'third_party/composable_kernel' 2025-03-17T17:43:07.2854201Z Entering 'third_party/cpp-httplib' 2025-03-17T17:43:07.2891195Z Entering 'third_party/cpuinfo' 2025-03-17T17:43:07.2929602Z Entering 'third_party/cudnn_frontend' 2025-03-17T17:43:07.2967631Z Entering 'third_party/cutlass' 2025-03-17T17:43:07.3014341Z Entering 'third_party/eigen' 2025-03-17T17:43:07.3054344Z Entering 'third_party/fbgemm' 2025-03-17T17:43:07.3093668Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:43:07.3129665Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:43:07.3165939Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:43:07.3208713Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:43:07.3244981Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T17:43:07.3281910Z Entering 'third_party/flash-attention' 2025-03-17T17:43:07.3319550Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:43:07.3363973Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:43:07.3408106Z Entering 'third_party/flatbuffers' 2025-03-17T17:43:07.3448026Z Entering 'third_party/fmt' 2025-03-17T17:43:07.3486338Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:43:07.3524331Z Entering 'third_party/gloo' 2025-03-17T17:43:07.3562347Z Entering 'third_party/googletest' 2025-03-17T17:43:07.3600284Z Entering 'third_party/ideep' 2025-03-17T17:43:07.3637231Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T17:43:07.3682118Z Entering 'third_party/ittapi' 2025-03-17T17:43:07.3719492Z Entering 'third_party/kineto' 2025-03-17T17:43:07.3758255Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:43:07.3795632Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:43:07.3833338Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:43:07.3870639Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:43:07.3907554Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:43:07.3944143Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:43:07.3981733Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:43:07.4017914Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:43:07.4054627Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:43:07.4092193Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:43:07.4129855Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:43:07.4166465Z 
Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:43:07.4203904Z Entering 'third_party/kleidiai' 2025-03-17T17:43:07.4242002Z Entering 'third_party/mimalloc' 2025-03-17T17:43:07.4279833Z Entering 'third_party/nlohmann' 2025-03-17T17:43:07.4318540Z Entering 'third_party/onnx' 2025-03-17T17:43:07.4370979Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T17:43:07.4409645Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T17:43:07.4449329Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:43:07.4484691Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:43:07.4520803Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:43:07.4558603Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:43:07.4596954Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T17:43:07.4632445Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:43:07.4668385Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:43:07.4704264Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:07.4742065Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:07.4780269Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:43:07.4835761Z Entering 'third_party/pocketfft' 2025-03-17T17:43:07.4872355Z Entering 'third_party/protobuf' 2025-03-17T17:43:07.4913954Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:07.4951101Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:07.4990135Z Entering 'third_party/psimd' 2025-03-17T17:43:07.5027493Z Entering 'third_party/pthreadpool' 2025-03-17T17:43:07.5064825Z Entering 'third_party/pybind11' 2025-03-17T17:43:07.5102080Z Entering 'third_party/python-peachpy' 2025-03-17T17:43:07.5138521Z Entering 'third_party/sleef' 2025-03-17T17:43:07.5175775Z Entering 'third_party/tensorpipe' 2025-03-17T17:43:07.5212935Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T17:43:07.5249229Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:07.5284874Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:07.5320769Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:07.5356250Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:07.5403897Z ##[endgroup] 2025-03-17T17:43:07.5404396Z ##[group]Persisting credentials for submodules 2025-03-17T17:43:07.5410487Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-03-17T17:43:07.5673398Z Entering 'android/libs/fbjni' 2025-03-17T17:43:07.5722939Z Entering 'third_party/FP16' 2025-03-17T17:43:07.5772660Z Entering 'third_party/FXdiv' 2025-03-17T17:43:07.5820789Z Entering 'third_party/NNPACK' 2025-03-17T17:43:07.5869077Z Entering 'third_party/NVTX' 2025-03-17T17:43:07.5917706Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T17:43:07.5965965Z Entering 'third_party/XNNPACK' 2025-03-17T17:43:07.6029050Z Entering 'third_party/benchmark' 2025-03-17T17:43:07.6077958Z Entering 'third_party/composable_kernel' 2025-03-17T17:43:07.6131519Z Entering 'third_party/cpp-httplib' 2025-03-17T17:43:07.6179775Z Entering 
'third_party/cpuinfo' 2025-03-17T17:43:07.6228264Z Entering 'third_party/cudnn_frontend' 2025-03-17T17:43:07.6278247Z Entering 'third_party/cutlass' 2025-03-17T17:43:07.6333656Z Entering 'third_party/eigen' 2025-03-17T17:43:07.6384193Z Entering 'third_party/fbgemm' 2025-03-17T17:43:07.6432653Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:43:07.6481086Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:43:07.6528744Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:43:07.6583656Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:43:07.6631438Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T17:43:07.6680478Z Entering 'third_party/flash-attention' 2025-03-17T17:43:07.6730430Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:43:07.6784127Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:43:07.6840035Z Entering 'third_party/flatbuffers' 2025-03-17T17:43:07.6890635Z Entering 'third_party/fmt' 2025-03-17T17:43:07.6939642Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:43:07.6988019Z Entering 'third_party/gloo' 2025-03-17T17:43:07.7038207Z Entering 'third_party/googletest' 2025-03-17T17:43:07.7086333Z Entering 'third_party/ideep' 2025-03-17T17:43:07.7133576Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T17:43:07.7190088Z Entering 'third_party/ittapi' 2025-03-17T17:43:07.7238331Z Entering 'third_party/kineto' 2025-03-17T17:43:07.7286527Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:43:07.7334295Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:43:07.7383883Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:43:07.7432355Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:43:07.7480805Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:43:07.7528859Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:43:07.7578759Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:43:07.7627172Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:43:07.7675916Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:43:07.7724719Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:43:07.7776868Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:43:07.7823879Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:43:07.7872615Z Entering 'third_party/kleidiai' 2025-03-17T17:43:07.7920932Z Entering 'third_party/mimalloc' 2025-03-17T17:43:07.7969739Z Entering 'third_party/nlohmann' 2025-03-17T17:43:07.8018734Z Entering 'third_party/onnx' 2025-03-17T17:43:07.8083695Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T17:43:07.8134037Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T17:43:07.8183709Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:43:07.8230792Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:43:07.8278383Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:43:07.8325429Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:43:07.8374140Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 
2025-03-17T17:43:07.8421162Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:43:07.8468592Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:43:07.8515377Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:07.8564995Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:07.8613695Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:43:07.8682048Z Entering 'third_party/pocketfft' 2025-03-17T17:43:07.8729525Z Entering 'third_party/protobuf' 2025-03-17T17:43:07.8780505Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:07.8827588Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:07.8876898Z Entering 'third_party/psimd' 2025-03-17T17:43:07.8924344Z Entering 'third_party/pthreadpool' 2025-03-17T17:43:07.8975085Z Entering 'third_party/pybind11' 2025-03-17T17:43:07.9027569Z Entering 'third_party/python-peachpy' 2025-03-17T17:43:07.9079355Z Entering 'third_party/sleef' 2025-03-17T17:43:07.9127445Z Entering 'third_party/tensorpipe' 2025-03-17T17:43:07.9176056Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T17:43:07.9224870Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:07.9274110Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:07.9321104Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:07.9370166Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:07.9437136Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-03-17T17:43:07.9706638Z Entering 'android/libs/fbjni' 2025-03-17T17:43:07.9758583Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-03-17T17:43:07.9772790Z Entering 'third_party/FP16' 2025-03-17T17:43:07.9818347Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-03-17T17:43:07.9833066Z Entering 'third_party/FXdiv' 2025-03-17T17:43:07.9880606Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-03-17T17:43:07.9894391Z Entering 'third_party/NNPACK' 2025-03-17T17:43:07.9940703Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-03-17T17:43:07.9955636Z Entering 'third_party/NVTX' 2025-03-17T17:43:08.0002125Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-03-17T17:43:08.0017123Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T17:43:08.0064726Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-03-17T17:43:08.0079199Z Entering 'third_party/XNNPACK' 2025-03-17T17:43:08.0126057Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-03-17T17:43:08.0161254Z Entering 'third_party/benchmark' 2025-03-17T17:43:08.0209423Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-03-17T17:43:08.0224312Z 
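Both the gc.auto pass and the credential passes in this block are driven by git submodule foreach --recursive, which runs the given command once per registered submodule and prints an "Entering '<path>'" banner each time; that is all the repeated "Entering ..." lines are. The simplest instance, taken verbatim from the gc.auto step shown earlier, is:

    git submodule foreach --recursive git config --local gc.auto 0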
Entering 'third_party/composable_kernel' 2025-03-17T17:43:08.0269737Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-03-17T17:43:08.0292130Z Entering 'third_party/cpp-httplib' 2025-03-17T17:43:08.0338655Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-03-17T17:43:08.0354003Z Entering 'third_party/cpuinfo' 2025-03-17T17:43:08.0399386Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-03-17T17:43:08.0413617Z Entering 'third_party/cudnn_frontend' 2025-03-17T17:43:08.0459390Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-03-17T17:43:08.0473523Z Entering 'third_party/cutlass' 2025-03-17T17:43:08.0522923Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-03-17T17:43:08.0545852Z Entering 'third_party/eigen' 2025-03-17T17:43:08.0592378Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/eigen/config remote.origin.url 2025-03-17T17:43:08.0608388Z Entering 'third_party/fbgemm' 2025-03-17T17:43:08.0654904Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-03-17T17:43:08.0669183Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:43:08.0717795Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/asmjit/config remote.origin.url 2025-03-17T17:43:08.0731611Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:43:08.0778324Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cpuinfo/config remote.origin.url 2025-03-17T17:43:08.0793192Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:43:08.0839580Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cutlass/config remote.origin.url 2025-03-17T17:43:08.0860913Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:43:08.0906899Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.0920611Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T17:43:08.0967386Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/hipify_torch/config remote.origin.url 2025-03-17T17:43:08.0982930Z Entering 'third_party/flash-attention' 2025-03-17T17:43:08.1029128Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-03-17T17:43:08.1044584Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:43:08.1091250Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-03-17T17:43:08.1110611Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:43:08.1155773Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-03-17T17:43:08.1178163Z Entering 'third_party/flatbuffers' 2025-03-17T17:43:08.1223582Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-03-17T17:43:08.1241676Z Entering 'third_party/fmt' 2025-03-17T17:43:08.1286808Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-03-17T17:43:08.1301881Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:43:08.1347875Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-03-17T17:43:08.1361800Z Entering 'third_party/gloo' 2025-03-17T17:43:08.1407958Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-03-17T17:43:08.1422288Z Entering 'third_party/googletest' 2025-03-17T17:43:08.1467094Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.1482335Z Entering 'third_party/ideep' 2025-03-17T17:43:08.1528491Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-03-17T17:43:08.1541964Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T17:43:08.1586156Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-03-17T17:43:08.1608400Z Entering 'third_party/ittapi' 2025-03-17T17:43:08.1653389Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-03-17T17:43:08.1667738Z Entering 'third_party/kineto' 2025-03-17T17:43:08.1712738Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-03-17T17:43:08.1727043Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:43:08.1773520Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-03-17T17:43:08.1787090Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:43:08.1833107Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-03-17T17:43:08.1848036Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:43:08.1893579Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-03-17T17:43:08.1907744Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:43:08.1954022Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-03-17T17:43:08.1967917Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:43:08.2013603Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-03-17T17:43:08.2026145Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:43:08.2072524Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-03-17T17:43:08.2087880Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:43:08.2133015Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-03-17T17:43:08.2147429Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:43:08.2192576Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.2206877Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:43:08.2252520Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-03-17T17:43:08.2267437Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:43:08.2312349Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-03-17T17:43:08.2327960Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:43:08.2372531Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2025-03-17T17:43:08.2386334Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:43:08.2430486Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.2446351Z Entering 'third_party/kleidiai' 2025-03-17T17:43:08.2491305Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url 2025-03-17T17:43:08.2507395Z Entering 'third_party/mimalloc' 2025-03-17T17:43:08.2552270Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2025-03-17T17:43:08.2567593Z Entering 'third_party/nlohmann' 2025-03-17T17:43:08.2612037Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2025-03-17T17:43:08.2628093Z Entering 'third_party/onnx' 2025-03-17T17:43:08.2675174Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2025-03-17T17:43:08.2705762Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T17:43:08.2751840Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2025-03-17T17:43:08.2767697Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T17:43:08.2814178Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2025-03-17T17:43:08.2829302Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:43:08.2873946Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 
2025-03-17T17:43:08.2887496Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:43:08.2932257Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.2946530Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:43:08.2991287Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2025-03-17T17:43:08.3004845Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:43:08.3050448Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2025-03-17T17:43:08.3064978Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T17:43:08.3110089Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2025-03-17T17:43:08.3123525Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:43:08.3167595Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2025-03-17T17:43:08.3181156Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:43:08.3226081Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2025-03-17T17:43:08.3239553Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:08.3284447Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-03-17T17:43:08.3300263Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:08.3345571Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-03-17T17:43:08.3361003Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:43:08.3405151Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2025-03-17T17:43:08.3437948Z Entering 'third_party/pocketfft' 2025-03-17T17:43:08.3483851Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2025-03-17T17:43:08.3498133Z Entering 'third_party/protobuf' 2025-03-17T17:43:08.3543776Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2025-03-17T17:43:08.3561086Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:08.3605561Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2025-03-17T17:43:08.3619280Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:08.3665434Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 
2025-03-17T17:43:08.3680738Z Entering 'third_party/psimd' 2025-03-17T17:43:08.3725680Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2025-03-17T17:43:08.3740974Z Entering 'third_party/pthreadpool' 2025-03-17T17:43:08.3785984Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2025-03-17T17:43:08.3800281Z Entering 'third_party/pybind11' 2025-03-17T17:43:08.3846207Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2025-03-17T17:43:08.3860691Z Entering 'third_party/python-peachpy' 2025-03-17T17:43:08.3905348Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2025-03-17T17:43:08.3920075Z Entering 'third_party/sleef' 2025-03-17T17:43:08.3965091Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2025-03-17T17:43:08.3981166Z Entering 'third_party/tensorpipe' 2025-03-17T17:43:08.4026888Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2025-03-17T17:43:08.4042680Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T17:43:08.4087476Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2025-03-17T17:43:08.4100949Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:08.4146067Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2025-03-17T17:43:08.4159818Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:08.4203989Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2025-03-17T17:43:08.4218101Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:08.4263006Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2025-03-17T17:43:08.4276138Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:08.4321138Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2025-03-17T17:43:08.5227653Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-03-17T17:43:08.5490334Z Entering 'android/libs/fbjni' 2025-03-17T17:43:08.5528001Z Entering 'third_party/FP16' 2025-03-17T17:43:08.5565091Z Entering 'third_party/FXdiv' 2025-03-17T17:43:08.5601958Z Entering 'third_party/NNPACK' 2025-03-17T17:43:08.5639314Z Entering 'third_party/NVTX' 2025-03-17T17:43:08.5676423Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T17:43:08.5713244Z Entering 'third_party/XNNPACK' 2025-03-17T17:43:08.5764514Z Entering 'third_party/benchmark' 2025-03-17T17:43:08.5802505Z Entering 'third_party/composable_kernel' 2025-03-17T17:43:08.5844750Z Entering 'third_party/cpp-httplib' 2025-03-17T17:43:08.5881478Z Entering 'third_party/cpuinfo' 2025-03-17T17:43:08.5918635Z Entering 'third_party/cudnn_frontend' 2025-03-17T17:43:08.5955835Z Entering 
'third_party/cutlass' 2025-03-17T17:43:08.5999262Z Entering 'third_party/eigen' 2025-03-17T17:43:08.6038383Z Entering 'third_party/fbgemm' 2025-03-17T17:43:08.6075740Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:43:08.6113474Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:43:08.6149548Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:43:08.6190658Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:43:08.6226089Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T17:43:08.6263884Z Entering 'third_party/flash-attention' 2025-03-17T17:43:08.6301120Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:43:08.6343552Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:43:08.6388467Z Entering 'third_party/flatbuffers' 2025-03-17T17:43:08.6428716Z Entering 'third_party/fmt' 2025-03-17T17:43:08.6466628Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:43:08.6503517Z Entering 'third_party/gloo' 2025-03-17T17:43:08.6541273Z Entering 'third_party/googletest' 2025-03-17T17:43:08.6578765Z Entering 'third_party/ideep' 2025-03-17T17:43:08.6614934Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T17:43:08.6659572Z Entering 'third_party/ittapi' 2025-03-17T17:43:08.6696109Z Entering 'third_party/kineto' 2025-03-17T17:43:08.6733250Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:43:08.6770248Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:43:08.6808331Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:43:08.6845340Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:43:08.6881979Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:43:08.6917784Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:43:08.6956328Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:43:08.6991671Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:43:08.7028606Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:43:08.7067178Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:43:08.7104621Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:43:08.7141202Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:43:08.7178284Z Entering 'third_party/kleidiai' 2025-03-17T17:43:08.7215495Z Entering 'third_party/mimalloc' 2025-03-17T17:43:08.7252617Z Entering 'third_party/nlohmann' 2025-03-17T17:43:08.7290402Z Entering 'third_party/onnx' 2025-03-17T17:43:08.7342562Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T17:43:08.7381831Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T17:43:08.7420136Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:43:08.7456029Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:43:08.7491405Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:43:08.7526933Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:43:08.7564102Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T17:43:08.7599767Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:43:08.7635808Z 
Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:43:08.7671103Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:08.7710459Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:08.7748099Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:43:08.7803318Z Entering 'third_party/pocketfft' 2025-03-17T17:43:08.7840634Z Entering 'third_party/protobuf' 2025-03-17T17:43:08.7881478Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:08.7917622Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:08.7957467Z Entering 'third_party/psimd' 2025-03-17T17:43:08.7994017Z Entering 'third_party/pthreadpool' 2025-03-17T17:43:08.8030637Z Entering 'third_party/pybind11' 2025-03-17T17:43:08.8067799Z Entering 'third_party/python-peachpy' 2025-03-17T17:43:08.8104523Z Entering 'third_party/sleef' 2025-03-17T17:43:08.8141707Z Entering 'third_party/tensorpipe' 2025-03-17T17:43:08.8178798Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T17:43:08.8214525Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:08.8249924Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:08.8285538Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:08.8321469Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:08.8372502Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-03-17T17:43:08.8637426Z Entering 'android/libs/fbjni' 2025-03-17T17:43:08.8675142Z Entering 'third_party/FP16' 2025-03-17T17:43:08.8712708Z Entering 'third_party/FXdiv' 2025-03-17T17:43:08.8749974Z Entering 'third_party/NNPACK' 2025-03-17T17:43:08.8786625Z Entering 'third_party/NVTX' 2025-03-17T17:43:08.8823930Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T17:43:08.8861435Z Entering 'third_party/XNNPACK' 2025-03-17T17:43:08.8913241Z Entering 'third_party/benchmark' 2025-03-17T17:43:08.8951037Z Entering 'third_party/composable_kernel' 2025-03-17T17:43:08.8994907Z Entering 'third_party/cpp-httplib' 2025-03-17T17:43:08.9033962Z Entering 'third_party/cpuinfo' 2025-03-17T17:43:08.9072274Z Entering 'third_party/cudnn_frontend' 2025-03-17T17:43:08.9110077Z Entering 'third_party/cutlass' 2025-03-17T17:43:08.9157275Z Entering 'third_party/eigen' 2025-03-17T17:43:08.9196858Z Entering 'third_party/fbgemm' 2025-03-17T17:43:08.9234336Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T17:43:08.9270732Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T17:43:08.9307488Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T17:43:08.9350884Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T17:43:08.9386807Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T17:43:08.9423705Z Entering 'third_party/flash-attention' 2025-03-17T17:43:08.9462460Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T17:43:08.9504162Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T17:43:08.9550155Z Entering 'third_party/flatbuffers' 2025-03-17T17:43:08.9590285Z Entering 'third_party/fmt' 2025-03-17T17:43:08.9627580Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T17:43:08.9665259Z Entering 'third_party/gloo' 2025-03-17T17:43:08.9702850Z Entering 'third_party/googletest' 2025-03-17T17:43:08.9741328Z Entering 
'third_party/ideep' 2025-03-17T17:43:08.9778620Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T17:43:08.9822117Z Entering 'third_party/ittapi' 2025-03-17T17:43:08.9860080Z Entering 'third_party/kineto' 2025-03-17T17:43:08.9898374Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T17:43:08.9934421Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T17:43:08.9972830Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T17:43:09.0008935Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T17:43:09.0045997Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T17:43:09.0081252Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T17:43:09.0119317Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T17:43:09.0156092Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T17:43:09.0192477Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T17:43:09.0229528Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T17:43:09.0267699Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T17:43:09.0303155Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T17:43:09.0346886Z Entering 'third_party/kleidiai' 2025-03-17T17:43:09.0382666Z Entering 'third_party/mimalloc' 2025-03-17T17:43:09.0419205Z Entering 'third_party/nlohmann' 2025-03-17T17:43:09.0459013Z Entering 'third_party/onnx' 2025-03-17T17:43:09.0512155Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T17:43:09.0551213Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T17:43:09.0589690Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T17:43:09.0625232Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T17:43:09.0662009Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T17:43:09.0697251Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T17:43:09.0733778Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T17:43:09.0769514Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T17:43:09.0804627Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T17:43:09.0840370Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T17:43:09.0878492Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T17:43:09.0916410Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T17:43:09.0971165Z Entering 'third_party/pocketfft' 2025-03-17T17:43:09.1008731Z Entering 'third_party/protobuf' 2025-03-17T17:43:09.1050616Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T17:43:09.1086964Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T17:43:09.1125320Z Entering 'third_party/psimd' 2025-03-17T17:43:09.1163379Z Entering 'third_party/pthreadpool' 2025-03-17T17:43:09.1200457Z Entering 'third_party/pybind11' 2025-03-17T17:43:09.1238409Z Entering 'third_party/python-peachpy' 2025-03-17T17:43:09.1275647Z Entering 'third_party/sleef' 2025-03-17T17:43:09.1313426Z Entering 'third_party/tensorpipe' 2025-03-17T17:43:09.1483005Z Entering 'third_party/tensorpipe/third_party/googletest' 
2025-03-17T17:43:09.1518898Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T17:43:09.1554525Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T17:43:09.1591174Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T17:43:09.1626772Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T17:43:09.1678568Z ##[endgroup] 2025-03-17T17:43:09.1709866Z [command]/usr/bin/git log -1 --format=%H 2025-03-17T17:43:09.1730015Z 52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:43:09.1902528Z Prepare all required actions 2025-03-17T17:43:09.1903149Z Getting action download info 2025-03-17T17:43:09.3465478Z ##[group]Run ./.github/actions/setup-linux 2025-03-17T17:43:09.3465829Z env: 2025-03-17T17:43:09.3466071Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:09.3466583Z ##[endgroup] 2025-03-17T17:43:09.3516449Z ##[group]Run set -euo pipefail 2025-03-17T17:43:09.3516825Z set -euo pipefail 2025-03-17T17:43:09.3517144Z function get_ec2_metadata() { 2025-03-17T17:43:09.3517555Z  # Pulled from instance metadata endpoint for EC2 2025-03-17T17:43:09.3518224Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2025-03-17T17:43:09.3518821Z  category=$1 2025-03-17T17:43:09.3519211Z  # If it is GCP runner (runner name contains gcp), do not run this 2025-03-17T17:43:09.3519680Z  runner_name_str=i-0287a0cab9cae3fa7 2025-03-17T17:43:09.3520082Z  if [[ -f /.inarc ]]; then 2025-03-17T17:43:09.3520455Z  echo "ARC Runner, no info on ec2 metadata" 2025-03-17T17:43:09.3520874Z  elif [[ $runner_name_str == *"gcp"* ]]; then 2025-03-17T17:43:09.3521375Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2025-03-17T17:43:09.3521848Z  else 2025-03-17T17:43:09.3522751Z  curl -H "X-aws-ec2-metadata-token: $(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")" -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2025-03-17T17:43:09.3523703Z  fi 2025-03-17T17:43:09.3523940Z } 2025-03-17T17:43:09.3524220Z echo "ami-id: $(get_ec2_metadata ami-id)" 2025-03-17T17:43:09.3524673Z echo "instance-id: $(get_ec2_metadata instance-id)" 2025-03-17T17:43:09.3525179Z echo "instance-type: $(get_ec2_metadata instance-type)" 2025-03-17T17:43:09.3525620Z echo "system info $(uname -a)" 2025-03-17T17:43:09.3532673Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:09.3533077Z env: 2025-03-17T17:43:09.3533314Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:09.3533596Z ##[endgroup] 2025-03-17T17:43:09.3665961Z ami-id: ami-08b5b3a93ed654d19 2025-03-17T17:43:09.3754737Z instance-id: i-0287a0cab9cae3fa7 2025-03-17T17:43:09.3841198Z instance-type: c5.2xlarge 2025-03-17T17:43:09.3850267Z system info Linux ip-10-0-54-109.ec2.internal 6.1.129-138.220.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Feb 25 22:18:43 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 2025-03-17T17:43:09.3881759Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-03-17T17:43:09.3882744Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-03-17T17:43:09.3888525Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:09.3888939Z env: 2025-03-17T17:43:09.3889179Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:09.3889455Z ##[endgroup] 2025-03-17T17:43:09.3949145Z ##[group]Run if systemctl is-active --quiet 
docker; then 2025-03-17T17:43:09.3949628Z if systemctl is-active --quiet docker; then 2025-03-17T17:43:09.3950065Z  echo "Docker daemon is running..."; 2025-03-17T17:43:09.3950419Z else 2025-03-17T17:43:09.3950805Z  echo "Starting docker daemon..." && sudo systemctl start docker; 2025-03-17T17:43:09.3951265Z fi 2025-03-17T17:43:09.3956448Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:09.3956863Z env: 2025-03-17T17:43:09.3957110Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:09.3957398Z ##[endgroup] 2025-03-17T17:43:09.4023618Z Docker daemon is running... 2025-03-17T17:43:09.4073550Z ##[group]Run nick-fields/retry@v3.0.0 2025-03-17T17:43:09.4073876Z with: 2025-03-17T17:43:09.4074103Z shell: bash 2025-03-17T17:43:09.4074493Z timeout_minutes: 5 2025-03-17T17:43:09.4074766Z max_attempts: 3 2025-03-17T17:43:09.4075034Z retry_wait_seconds: 30 2025-03-17T17:43:09.4077426Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" # For LF Runners we need to make sure we also login to Meta's ECR docker registry too. META_AWS_ACCOUNT_ID=308535385114 if [ "$AWS_ACCOUNT_ID" != "$META_AWS_ACCOUNT_ID" ] ; then aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$META_AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" fi 2025-03-17T17:43:09.4079978Z polling_interval_seconds: 1 2025-03-17T17:43:09.4080295Z warning_on_retry: true 2025-03-17T17:43:09.4080592Z continue_on_error: false 2025-03-17T17:43:09.4080864Z env: 2025-03-17T17:43:09.4081107Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:09.4081405Z AWS_RETRY_MODE: standard 2025-03-17T17:43:09.4081700Z AWS_MAX_ATTEMPTS: 5 2025-03-17T17:43:09.4081988Z AWS_DEFAULT_REGION: us-east-1 2025-03-17T17:43:09.4082296Z ##[endgroup] 2025-03-17T17:43:10.5740876Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-03-17T17:43:10.5741807Z Configure a credential helper to remove this warning. See 2025-03-17T17:43:10.5742598Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-03-17T17:43:10.5743024Z 2025-03-17T17:43:10.5743133Z Login Succeeded 2025-03-17T17:43:11.5524308Z Command completed after 1 attempt(s).
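For reference, the ECR login performed by the retry step above reduces to the pattern sketched below. This is a minimal standalone sketch, not the workflow's actual code: the region and registry are the values visible in this log, the bounded loop stands in for the nick-fields/retry action configured above (max_attempts: 3, retry_wait_seconds: 30), and the secondary login to Meta's ECR account for LF runners is omitted.

#!/usr/bin/env bash
# Sketch only: log in to a private Amazon ECR registry, retrying up to 3 times.
set -euo pipefail

AWS_DEFAULT_REGION="us-east-1"                            # from the job env above
REGISTRY="308535385114.dkr.ecr.us-east-1.amazonaws.com"   # registry seen in this log

ecr_login() {
  # get-login-password prints a short-lived token; docker login reads it from
  # stdin, so the credential never appears on the command line.
  aws ecr get-login-password --region "${AWS_DEFAULT_REGION}" \
    | docker login --username AWS --password-stdin "${REGISTRY}"
}

for attempt in 1 2 3; do
  if ecr_login; then
    echo "Logged in on attempt ${attempt}"
    exit 0
  fi
  echo "ECR login attempt ${attempt} failed, retrying in 30s..." >&2
  sleep 30
done
echo "ECR login failed after 3 attempts" >&2
exit 1

Even with --password-stdin, docker login writes the token into ~/.docker/config.json, which is why the log shows the "stored unencrypted" warning; configuring a credential helper, as that warning suggests, avoids persisting it there.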
2025-03-17T17:43:11.5586223Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-03-17T17:43:11.5586933Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-03-17T17:43:11.5587447Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-03-17T17:43:11.5594006Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:11.5594413Z env: 2025-03-17T17:43:11.5594661Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:11.5594976Z ##[endgroup] 2025-03-17T17:43:11.5681543Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-03-17T17:43:11.5682167Z # ignore expansion of "docker ps -q" since it could be empty 2025-03-17T17:43:11.5682626Z # shellcheck disable=SC2046 2025-03-17T17:43:11.5682989Z docker stop $(docker ps -q) || true 2025-03-17T17:43:11.5683367Z # Prune all of the docker images 2025-03-17T17:43:11.5683725Z docker system prune -af 2025-03-17T17:43:11.5689008Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:11.5689419Z env: 2025-03-17T17:43:11.5689657Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:11.5689945Z ##[endgroup] 2025-03-17T17:43:11.5904023Z "docker stop" requires at least 1 argument. 2025-03-17T17:43:11.5904468Z See 'docker stop --help'. 2025-03-17T17:43:11.5904670Z 2025-03-17T17:43:11.5904863Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 2025-03-17T17:43:11.5905166Z 2025-03-17T17:43:11.5905287Z Stop one or more running containers 2025-03-17T17:43:11.6046716Z Total reclaimed space: 0B 2025-03-17T17:43:11.6092428Z ##[group]Run set +e 2025-03-17T17:43:11.6092923Z set +e 2025-03-17T17:43:11.6093335Z set -x 2025-03-17T17:43:11.6093727Z  2025-03-17T17:43:11.6094161Z PT_DOMAIN=download.pytorch.org 2025-03-17T17:43:11.6095233Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2025-03-17T17:43:11.6096602Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2025-03-17T17:43:11.6097586Z # one is returned at random 2025-03-17T17:43:11.6098317Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2025-03-17T17:43:11.6099021Z  2025-03-17T17:43:11.6099711Z if [ -z "${RESOLVED_IP}" ]; then 2025-03-17T17:43:11.6100526Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 2025-03-17T17:43:11.6101484Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2025-03-17T17:43:11.6102230Z  2025-03-17T17:43:11.6102857Z  if [ -z "${RESOLVED_IP}" ]; then 2025-03-17T17:43:11.6103573Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 
2025-03-17T17:43:11.6104282Z  exit 1 2025-03-17T17:43:11.6104723Z  fi 2025-03-17T17:43:11.6105086Z fi 2025-03-17T17:43:11.6105475Z  2025-03-17T17:43:11.6105951Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2025-03-17T17:43:11.6106777Z  # Clean up any old records first 2025-03-17T17:43:11.6107463Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2025-03-17T17:43:11.6108098Z fi 2025-03-17T17:43:11.6108493Z  2025-03-17T17:43:11.6109106Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2025-03-17T17:43:11.6109879Z cat /etc/hosts 2025-03-17T17:43:11.6118211Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:11.6118870Z env: 2025-03-17T17:43:11.6119250Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:11.6119743Z ##[endgroup] 2025-03-17T17:43:11.6148021Z + PT_DOMAIN=download.pytorch.org 2025-03-17T17:43:11.6153237Z ++ dig -4 +short download.pytorch.org 2025-03-17T17:43:11.6153859Z ++ tail -n1 2025-03-17T17:43:11.6587120Z + RESOLVED_IP=18.160.10.36 2025-03-17T17:43:11.6587458Z + '[' -z 18.160.10.36 ']' 2025-03-17T17:43:11.6587780Z + grep -r download.pytorch.org /etc/hosts 2025-03-17T17:43:11.6601248Z + echo '18.160.10.36 download.pytorch.org' 2025-03-17T17:43:11.6601652Z + sudo tee -a /etc/hosts 2025-03-17T17:43:12.0724706Z 18.160.10.36 download.pytorch.org 2025-03-17T17:43:12.0740168Z + cat /etc/hosts 2025-03-17T17:43:12.0747517Z 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2025-03-17T17:43:12.0753942Z ::1 localhost6 localhost6.localdomain6 2025-03-17T17:43:12.0754415Z 18.160.10.36 download.pytorch.org 2025-03-17T17:43:12.0939391Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2025-03-17T17:43:12.0940102Z with: 2025-03-17T17:43:12.0940827Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.0941668Z docker-build-dir: .ci/docker 2025-03-17T17:43:12.0941981Z working-directory: . 2025-03-17T17:43:12.0942364Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.0942798Z force-push: false 2025-03-17T17:43:12.0943063Z env: 2025-03-17T17:43:12.0943300Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:12.0943588Z ##[endgroup] 2025-03-17T17:43:12.0974412Z ##[group]Run set -ex 2025-03-17T17:43:12.0974744Z set -ex 2025-03-17T17:43:12.0974981Z  2025-03-17T17:43:12.0975412Z # If the docker build directory or the build script doesn't exist, the action will 2025-03-17T17:43:12.0976177Z # gracefully return the docker image name as it is. Pulling docker image in Linux 2025-03-17T17:43:12.0976786Z # job could then download the pre-built image as usual 2025-03-17T17:43:12.0977350Z if [[ ! -d "${DOCKER_BUILD_DIR}" ]] || [[ ! -f "${DOCKER_BUILD_DIR}/build.sh" ]]; then 2025-03-17T17:43:12.0977857Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0978330Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0978769Z  2025-03-17T17:43:12.0979149Z  echo "There is no Docker build script in ${REPO_NAME} repo, skipping..." 
2025-03-17T17:43:12.0979624Z  exit 0 2025-03-17T17:43:12.0979871Z else 2025-03-17T17:43:12.0980147Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0980501Z fi 2025-03-17T17:43:12.0980732Z  2025-03-17T17:43:12.0981094Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-03-17T17:43:12.0981738Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-03-17T17:43:12.0982309Z  # use it as it is, but first let's extract the tag 2025-03-17T17:43:12.0982952Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-03-17T17:43:12.0983498Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0984022Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0984463Z else 2025-03-17T17:43:12.0984808Z  DOCKER_TAG=$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-03-17T17:43:12.0985307Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0985989Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.0986703Z fi 2025-03-17T17:43:12.0993443Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:12.0993838Z env: 2025-03-17T17:43:12.0994073Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:12.0994354Z REPO_NAME: pytorch 2025-03-17T17:43:12.0995084Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.0995885Z DOCKER_BUILD_DIR: .ci/docker 2025-03-17T17:43:12.0996275Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.0996681Z ##[endgroup] 2025-03-17T17:43:12.1019165Z + [[ ! -d .ci/docker ]] 2025-03-17T17:43:12.1019480Z + [[ ! 
-f .ci/docker/build.sh ]] 2025-03-17T17:43:12.1019781Z + echo skip=false 2025-03-17T17:43:12.1020779Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 == *\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2025-03-17T17:43:12.1025607Z ++ echo 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1026455Z ++ awk -F '[:,]' '{print $2}' 2025-03-17T17:43:12.1045489Z + DOCKER_TAG=70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1045983Z + echo docker-tag=70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1047096Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1082005Z ##[group]Run set +e 2025-03-17T17:43:12.1082329Z set +e 2025-03-17T17:43:12.1082582Z set -x 2025-03-17T17:43:12.1082824Z  2025-03-17T17:43:12.1083054Z login() { 2025-03-17T17:43:12.1083558Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-03-17T17:43:12.1084124Z } 2025-03-17T17:43:12.1084360Z  2025-03-17T17:43:12.1084598Z retry () { 2025-03-17T17:43:12.1084907Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-03-17T17:43:12.1085256Z } 2025-03-17T17:43:12.1085487Z  2025-03-17T17:43:12.1085748Z retry login "${DOCKER_REGISTRY}" 2025-03-17T17:43:12.1086095Z  2025-03-17T17:43:12.1086343Z START_TIME=$(date +%s) 2025-03-17T17:43:12.1086665Z # Wait up to 120 minutes 2025-03-17T17:43:12.1087063Z while [[ $(( $(date +%s) - 7200 )) -lt $START_TIME ]]; do 2025-03-17T17:43:12.1087598Z  # Check if image already exists, if it does then skip building it 2025-03-17T17:43:12.1088128Z  if docker manifest inspect "${DOCKER_IMAGE}"; then 2025-03-17T17:43:12.1088512Z  exit 0 2025-03-17T17:43:12.1088768Z  fi 2025-03-17T17:43:12.1089005Z  2025-03-17T17:43:12.1089421Z  # NB: This flag is used by Docker build workflow to push the image to ECR, so we can 2025-03-17T17:43:12.1090129Z  # use this to differentiate between the Docker build and regular build jobs. For the 2025-03-17T17:43:12.1090837Z  # latter, it will wait for the Docker images to become available before continuing 2025-03-17T17:43:12.1091534Z  if [ "${DOCKER_PUSH:-false}" == "true" ]; then 2025-03-17T17:43:12.1091969Z  # It's a Docker build job, let's build the image 2025-03-17T17:43:12.1092346Z  break 2025-03-17T17:43:12.1092601Z  else 2025-03-17T17:43:12.1092970Z  # It's a regular build job, wait for the image to become available 2025-03-17T17:43:12.1093413Z  sleep 300 2025-03-17T17:43:12.1093679Z  fi 2025-03-17T17:43:12.1093916Z done 2025-03-17T17:43:12.1094150Z  2025-03-17T17:43:12.1094528Z # NB: This part requires a full checkout. Otherwise, the merge base will 2025-03-17T17:43:12.1095140Z # be empty. 
The default action would be to continue rebuild the image 2025-03-17T17:43:12.1095695Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2025-03-17T17:43:12.1096184Z  # if we're on the base branch then use the parent commit 2025-03-17T17:43:12.1096622Z  MERGE_BASE=$(git rev-parse HEAD~) 2025-03-17T17:43:12.1096963Z else 2025-03-17T17:43:12.1097320Z  # otherwise we're on a PR, so use the most recent base commit 2025-03-17T17:43:12.1097831Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2025-03-17T17:43:12.1098221Z fi 2025-03-17T17:43:12.1098447Z  2025-03-17T17:43:12.1098833Z if [[ -z "${MERGE_BASE}" ]]; then 2025-03-17T17:43:12.1099218Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.1099578Z  2025-03-17T17:43:12.1100079Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2025-03-17T17:43:12.1100674Z  exit 0 2025-03-17T17:43:12.1100926Z fi 2025-03-17T17:43:12.1101289Z  2025-03-17T17:43:12.1101633Z if ! git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2025-03-17T17:43:12.1102366Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2025-03-17T17:43:12.1103002Z  exit 1 2025-03-17T17:43:12.1103256Z fi 2025-03-17T17:43:12.1103492Z  2025-03-17T17:43:12.1103883Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2025-03-17T17:43:12.1104583Z # If no image exists but the hash is the same as the previous hash then we should error out here 2025-03-17T17:43:12.1105208Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2025-03-17T17:43:12.1105929Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2025-03-17T17:43:12.1106844Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2025-03-17T17:43:12.1107338Z fi 2025-03-17T17:43:12.1107658Z  2025-03-17T17:43:12.1107943Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-03-17T17:43:12.1114013Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:12.1114420Z env: 2025-03-17T17:43:12.1114659Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:12.1114958Z DOCKER_BUILD_DIR: .ci/docker 2025-03-17T17:43:12.1115320Z BASE_REVISION: 7d50234dff8a52633fd546660a133b6f1ab443a9 2025-03-17T17:43:12.1116160Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1116980Z DOCKER_TAG: 70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.1117450Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.1117868Z DOCKER_PUSH: 2025-03-17T17:43:12.1118116Z ##[endgroup] 2025-03-17T17:43:12.1139877Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.1140353Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.1142297Z + aws ecr get-login-password --region us-east-1 2025-03-17T17:43:12.1143421Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:12.6216244Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-03-17T17:43:12.6216912Z Configure a credential helper to remove this warning. 
See 2025-03-17T17:43:12.6217675Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-03-17T17:43:12.6218094Z 2025-03-17T17:43:12.6218217Z Login Succeeded 2025-03-17T17:43:12.6230955Z ++ date +%s 2025-03-17T17:43:12.6239604Z + START_TIME=1742233392 2025-03-17T17:43:12.6242324Z ++ date +%s 2025-03-17T17:43:12.6250074Z + [[ 1742226192 -lt 1742233392 ]] 2025-03-17T17:43:12.6250974Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:12.8649178Z { 2025-03-17T17:43:12.8649496Z "schemaVersion": 2, 2025-03-17T17:43:12.8650253Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2025-03-17T17:43:12.8650846Z "config": { 2025-03-17T17:43:12.8651248Z "mediaType": "application/vnd.docker.container.image.v1+json", 2025-03-17T17:43:12.8652025Z "size": 41574, 2025-03-17T17:43:12.8652634Z "digest": "sha256:e6cba42f176eca517d1f8851c8f196198dcfd7dec3dbfdd0d4505d8ee86a6e4a" 2025-03-17T17:43:12.8653283Z }, 2025-03-17T17:43:12.8653604Z "layers": [ 2025-03-17T17:43:12.8654023Z { 2025-03-17T17:43:12.8654548Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8655026Z "size": 28583948, 2025-03-17T17:43:12.8655863Z "digest": "sha256:86e5016c269355b382c9cabab4f6646d56d75914f20d545289970436dae431b1" 2025-03-17T17:43:12.8656377Z }, 2025-03-17T17:43:12.8656597Z { 2025-03-17T17:43:12.8657147Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8657606Z "size": 1893, 2025-03-17T17:43:12.8658046Z "digest": "sha256:9fca0f71e106d8cf57d949ce154c5371da2acf3598248475ba4e5091b5b26660" 2025-03-17T17:43:12.8658567Z }, 2025-03-17T17:43:12.8658784Z { 2025-03-17T17:43:12.8659136Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8659693Z "size": 319464641, 2025-03-17T17:43:12.8660153Z "digest": "sha256:48fc79ac51c764aa77f06135cd5bd73b28f4b38b0dbe8bf6bbfcda7237f5cecc" 2025-03-17T17:43:12.8660664Z }, 2025-03-17T17:43:12.8660878Z { 2025-03-17T17:43:12.8661216Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8661660Z "size": 863, 2025-03-17T17:43:12.8662103Z "digest": "sha256:1502f91aae283efadafd63a8fdaf08e5e7c04b68c32ebf2a5f9ff75847bd85e2" 2025-03-17T17:43:12.8662781Z }, 2025-03-17T17:43:12.8663044Z { 2025-03-17T17:43:12.8663397Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8663858Z "size": 79404639, 2025-03-17T17:43:12.8664596Z "digest": "sha256:122d0dd9d0af6f8a67fbf2bbb87ebcbf37758a9062f85021c13a3e63b74bf358" 2025-03-17T17:43:12.8665501Z }, 2025-03-17T17:43:12.8665727Z { 2025-03-17T17:43:12.8666139Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8667012Z "size": 703, 2025-03-17T17:43:12.8667463Z "digest": "sha256:7937e7d835eb23b166b5f1dac2ee19368d07a27966c35f26c2a8c3466e20a664" 2025-03-17T17:43:12.8667963Z }, 2025-03-17T17:43:12.8668176Z { 2025-03-17T17:43:12.8668527Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8668976Z "size": 1263, 2025-03-17T17:43:12.8669407Z "digest": "sha256:51c49d389de83c233014fe6ae83c67700002a45db6746e63fb618891125a125b" 2025-03-17T17:43:12.8669907Z }, 2025-03-17T17:43:12.8670109Z { 2025-03-17T17:43:12.8670482Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8670931Z "size": 484, 2025-03-17T17:43:12.8671383Z "digest": 
"sha256:0b6a0950cc92894e1d1ee46e71bf974b40ef98dcad8ea3538fb0f1311dbd922e" 2025-03-17T17:43:12.8671893Z }, 2025-03-17T17:43:12.8672111Z { 2025-03-17T17:43:12.8672606Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8673056Z "size": 110, 2025-03-17T17:43:12.8673494Z "digest": "sha256:a7271d4daa6903acee6d8c431336279e9abfb0edd53935843795d3e5274d43d4" 2025-03-17T17:43:12.8673987Z }, 2025-03-17T17:43:12.8674207Z { 2025-03-17T17:43:12.8674562Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8675014Z "size": 4183, 2025-03-17T17:43:12.8675464Z "digest": "sha256:58944619eebfcb4720d162f8663a86e4ceb1fc7babfcedbee2c1db2ce01a6f05" 2025-03-17T17:43:12.8675979Z }, 2025-03-17T17:43:12.8676197Z { 2025-03-17T17:43:12.8676550Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8676996Z "size": 1860, 2025-03-17T17:43:12.8677440Z "digest": "sha256:36598b9d00be2ab11a257741cced68c52f7a8793fed17bd961481f2ab9296297" 2025-03-17T17:43:12.8677942Z }, 2025-03-17T17:43:12.8678153Z { 2025-03-17T17:43:12.8678503Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8678951Z "size": 700, 2025-03-17T17:43:12.8679400Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8679916Z }, 2025-03-17T17:43:12.8680114Z { 2025-03-17T17:43:12.8680465Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8680907Z "size": 479, 2025-03-17T17:43:12.8681345Z "digest": "sha256:3d159d7e02e0f6a5dcd04af1a44e13a8c66b737fee0c7fbcc4462c910b534c4b" 2025-03-17T17:43:12.8681854Z }, 2025-03-17T17:43:12.8682069Z { 2025-03-17T17:43:12.8682419Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8682868Z "size": 2834309849, 2025-03-17T17:43:12.8683406Z "digest": "sha256:708acdd578e70678a9e5e1f357d3107d85faa5b82ae455fb1e1415c78dd46fdc" 2025-03-17T17:43:12.8683917Z }, 2025-03-17T17:43:12.8684133Z { 2025-03-17T17:43:12.8684482Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8684931Z "size": 32, 2025-03-17T17:43:12.8685365Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8685869Z }, 2025-03-17T17:43:12.8686083Z { 2025-03-17T17:43:12.8686420Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8686863Z "size": 381, 2025-03-17T17:43:12.8687299Z "digest": "sha256:ee561d63497b661ddfbbc61f59adb40a63215aaa9ffa6d7f63879689961c329b" 2025-03-17T17:43:12.8687806Z }, 2025-03-17T17:43:12.8688019Z { 2025-03-17T17:43:12.8688372Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8688816Z "size": 104, 2025-03-17T17:43:12.8689247Z "digest": "sha256:6869a34d30857e40d0ec703a2f3067dffbfdf99f14b6743bf2c8870665d7bf59" 2025-03-17T17:43:12.8689747Z }, 2025-03-17T17:43:12.8689963Z { 2025-03-17T17:43:12.8690314Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8690756Z "size": 231, 2025-03-17T17:43:12.8691197Z "digest": "sha256:750d0a5ea8af3ac1e1ce71d2ab33a052928fdc9ed8803b7967894736e5643640" 2025-03-17T17:43:12.8691704Z }, 2025-03-17T17:43:12.8691914Z { 2025-03-17T17:43:12.8692263Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8692695Z "size": 3766270, 2025-03-17T17:43:12.8693140Z "digest": "sha256:643ddef2c79407842311da17f1c34c3d4f7e4c95bd9f1b806cc5b7fcc530826b" 
2025-03-17T17:43:12.8693646Z }, 2025-03-17T17:43:12.8693860Z { 2025-03-17T17:43:12.8694209Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8694660Z "size": 1946, 2025-03-17T17:43:12.8695095Z "digest": "sha256:be4499a3cc2ec7c14f315fe3499a0561657954f6db31cba57b089313db1582a2" 2025-03-17T17:43:12.8695595Z }, 2025-03-17T17:43:12.8695805Z { 2025-03-17T17:43:12.8696159Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8696600Z "size": 105, 2025-03-17T17:43:12.8697029Z "digest": "sha256:5293b32ec021f767c9f8862b34bb8acab2ee763e988c764d630e58d5827816f1" 2025-03-17T17:43:12.8697607Z }, 2025-03-17T17:43:12.8697820Z { 2025-03-17T17:43:12.8698170Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8698616Z "size": 802, 2025-03-17T17:43:12.8699045Z "digest": "sha256:107d9d8d8628b89a3b0b82d77581802ec187d393c2f39d86b6f1aa019cf1db46" 2025-03-17T17:43:12.8699529Z }, 2025-03-17T17:43:12.8699743Z { 2025-03-17T17:43:12.8700098Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8700546Z "size": 32, 2025-03-17T17:43:12.8700985Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8701496Z }, 2025-03-17T17:43:12.8701709Z { 2025-03-17T17:43:12.8702061Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8702508Z "size": 104, 2025-03-17T17:43:12.8702936Z "digest": "sha256:8d95eba27708e2944655d753a3646bbb7c29f2922800f40ed2158840d4e8ef6e" 2025-03-17T17:43:12.8703435Z }, 2025-03-17T17:43:12.8703648Z { 2025-03-17T17:43:12.8703999Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8704445Z "size": 504, 2025-03-17T17:43:12.8704884Z "digest": "sha256:2c2d5cb4739ae78fb726da3a4541d6440e9e1b58e765ae618d6a81e43f3c5fa4" 2025-03-17T17:43:12.8705389Z }, 2025-03-17T17:43:12.8705592Z { 2025-03-17T17:43:12.8705943Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8706482Z "size": 121478524, 2025-03-17T17:43:12.8706945Z "digest": "sha256:97b06d62430aaea3d9a8f1697007acac82833f61cf319c78ba72d6a6fab4069c" 2025-03-17T17:43:12.8707454Z }, 2025-03-17T17:43:12.8707671Z { 2025-03-17T17:43:12.8708109Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8708563Z "size": 109, 2025-03-17T17:43:12.8709003Z "digest": "sha256:3308da525a945545647eb9aaa97eec27ec1dc69b5f271cfac1aa89915d0d0d9d" 2025-03-17T17:43:12.8709518Z }, 2025-03-17T17:43:12.8709737Z { 2025-03-17T17:43:12.8710091Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8710547Z "size": 489, 2025-03-17T17:43:12.8710984Z "digest": "sha256:1e826e5aad1591a19b232f90db53699d12dedc3cbd19615b3a6d33de719ae57f" 2025-03-17T17:43:12.8711491Z }, 2025-03-17T17:43:12.8711706Z { 2025-03-17T17:43:12.8712044Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8712488Z "size": 385, 2025-03-17T17:43:12.8712912Z "digest": "sha256:cf5152e32c6ac79276c633e666c3349c8158f956a6da86f14461d346f6d96918" 2025-03-17T17:43:12.8713404Z }, 2025-03-17T17:43:12.8713617Z { 2025-03-17T17:43:12.8713967Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8714411Z "size": 103, 2025-03-17T17:43:12.8714835Z "digest": "sha256:d8bb828d31115769319494f4247146d157970f23f819382cc0526bae8b70a563" 2025-03-17T17:43:12.8715322Z }, 2025-03-17T17:43:12.8715535Z { 2025-03-17T17:43:12.8715889Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8716334Z "size": 1473, 2025-03-17T17:43:12.8716774Z "digest": "sha256:b2a7239ab16ceca7826edb321ecc59513f78c50d9668c323fff1f344af8de21d" 2025-03-17T17:43:12.8717290Z }, 2025-03-17T17:43:12.8717504Z { 2025-03-17T17:43:12.8717851Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8718300Z "size": 427727373, 2025-03-17T17:43:12.8718741Z "digest": "sha256:0ce8c7e7a00bcae578f8bb623704e5c3c5672a63bf50c631ee481399d7abc8f8" 2025-03-17T17:43:12.8719253Z }, 2025-03-17T17:43:12.8719466Z { 2025-03-17T17:43:12.8719815Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8720256Z "size": 164, 2025-03-17T17:43:12.8720698Z "digest": "sha256:8a958ba06a54de93939ed075bfaea79c1d0ad11f04a3a23d498ebc9027e48f60" 2025-03-17T17:43:12.8721201Z }, 2025-03-17T17:43:12.8721413Z { 2025-03-17T17:43:12.8721763Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8722398Z "size": 423, 2025-03-17T17:43:12.8722842Z "digest": "sha256:4faaa33c53bd714b30d10653f0a8c577f7a6379bc7cd11e91e7281c50312f3a5" 2025-03-17T17:43:12.8723352Z }, 2025-03-17T17:43:12.8723570Z { 2025-03-17T17:43:12.8723924Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8724372Z "size": 32, 2025-03-17T17:43:12.8724809Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8725304Z }, 2025-03-17T17:43:12.8725516Z { 2025-03-17T17:43:12.8725865Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8726309Z "size": 111, 2025-03-17T17:43:12.8726758Z "digest": "sha256:05dec121f1a5fbdf5ffa4917b34c00981b5ecb78c9ea0cffc4ced4200d62c84f" 2025-03-17T17:43:12.8727269Z }, 2025-03-17T17:43:12.8727482Z { 2025-03-17T17:43:12.8727831Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8728277Z "size": 474, 2025-03-17T17:43:12.8728717Z "digest": "sha256:2bb9b1c0bd33c33a68acb9f81441368d4c93fd95095fe4221a8f3cbbba14e24b" 2025-03-17T17:43:12.8729224Z }, 2025-03-17T17:43:12.8729440Z { 2025-03-17T17:43:12.8729788Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8730230Z "size": 32, 2025-03-17T17:43:12.8730664Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8731172Z }, 2025-03-17T17:43:12.8731371Z { 2025-03-17T17:43:12.8731718Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8732157Z "size": 112, 2025-03-17T17:43:12.8732601Z "digest": "sha256:9c859a607fd769ebdcdefb2c4f9adcf2b49c6f2c1d8e8edaf22068a6c5df27b5" 2025-03-17T17:43:12.8733191Z }, 2025-03-17T17:43:12.8733408Z { 2025-03-17T17:43:12.8733758Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8734203Z "size": 566, 2025-03-17T17:43:12.8734639Z "digest": "sha256:ce828f4c87519a77ef8415c6f8ea601f1b1711d128b41da9000d981f4472b1a0" 2025-03-17T17:43:12.8735145Z }, 2025-03-17T17:43:12.8735359Z { 2025-03-17T17:43:12.8735711Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8736156Z "size": 43147974, 2025-03-17T17:43:12.8736612Z "digest": "sha256:e9b9ea34fa7dd07a607dfaa83ab31ba1be2f3304186d213d880a44b6aa09dc64" 2025-03-17T17:43:12.8737586Z }, 2025-03-17T17:43:12.8737804Z { 2025-03-17T17:43:12.8738144Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8738590Z "size": 106, 
2025-03-17T17:43:12.8739026Z "digest": "sha256:7d26eaf3bcc6419f69417ffe38f080851e8c8356a74b6068a8131db7b1cf59c8" 2025-03-17T17:43:12.8739540Z }, 2025-03-17T17:43:12.8739764Z { 2025-03-17T17:43:12.8740119Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8740567Z "size": 346, 2025-03-17T17:43:12.8740999Z "digest": "sha256:2016aa41a00498b49df9a52d3812b62ec5c099d01218292756310254dee645ec" 2025-03-17T17:43:12.8741501Z }, 2025-03-17T17:43:12.8741719Z { 2025-03-17T17:43:12.8742077Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8742530Z "size": 32, 2025-03-17T17:43:12.8742970Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8743483Z }, 2025-03-17T17:43:12.8743703Z { 2025-03-17T17:43:12.8744057Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8744506Z "size": 106, 2025-03-17T17:43:12.8744933Z "digest": "sha256:8f8e06ffc4245ddd7642adb49bae238afede80732909295248a9ed8682dd9b3b" 2025-03-17T17:43:12.8745439Z }, 2025-03-17T17:43:12.8745652Z { 2025-03-17T17:43:12.8746005Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8746529Z "size": 425, 2025-03-17T17:43:12.8746962Z "digest": "sha256:11a7893112d9d108e90e74b85f0f21f661e8061cc8ccff211336991ca7ad679a" 2025-03-17T17:43:12.8747595Z }, 2025-03-17T17:43:12.8747810Z { 2025-03-17T17:43:12.8748156Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8748600Z "size": 20262185, 2025-03-17T17:43:12.8749058Z "digest": "sha256:efaaabeb52d48f98d77adf370bf56d117d1adf6cb8e797006444a7cf860d5ba3" 2025-03-17T17:43:12.8749570Z }, 2025-03-17T17:43:12.8749782Z { 2025-03-17T17:43:12.8750129Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8750572Z "size": 108, 2025-03-17T17:43:12.8750998Z "digest": "sha256:6709e381c2159610acd9973335ae630fac59418baa0005ffb75909f24db26ae7" 2025-03-17T17:43:12.8751491Z }, 2025-03-17T17:43:12.8751696Z { 2025-03-17T17:43:12.8752051Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8752498Z "size": 644, 2025-03-17T17:43:12.8752941Z "digest": "sha256:fdad8b1754ceaf8c911ce73ceca83f1848979bcd04b213bcff08055dcb0cccd6" 2025-03-17T17:43:12.8753460Z }, 2025-03-17T17:43:12.8753675Z { 2025-03-17T17:43:12.8754023Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8754466Z "size": 700, 2025-03-17T17:43:12.8754911Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8755421Z }, 2025-03-17T17:43:12.8755633Z { 2025-03-17T17:43:12.8755979Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8756425Z "size": 141, 2025-03-17T17:43:12.8756857Z "digest": "sha256:5be9bd12f5ac646c934a10633f4823dc57e1e28c7623c35a23e13d5b0213b59f" 2025-03-17T17:43:12.8757359Z }, 2025-03-17T17:43:12.8757561Z { 2025-03-17T17:43:12.8757913Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8758471Z "size": 137, 2025-03-17T17:43:12.8758921Z "digest": "sha256:489c21ceb4a2ebf311769c433d0f7bea1a1e62cbb1e2764d81ae56e37ade8096" 2025-03-17T17:43:12.8759437Z }, 2025-03-17T17:43:12.8759659Z { 2025-03-17T17:43:12.8760012Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8760460Z "size": 32, 2025-03-17T17:43:12.8760908Z "digest": 
"sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8761414Z }, 2025-03-17T17:43:12.8761627Z { 2025-03-17T17:43:12.8761977Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8762424Z "size": 195, 2025-03-17T17:43:12.8762855Z "digest": "sha256:81f8b136cee2285e67494e51baeb9ae8803468d80d1314c4773b56d36aac45f3" 2025-03-17T17:43:12.8763360Z }, 2025-03-17T17:43:12.8763576Z { 2025-03-17T17:43:12.8763929Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8764361Z "size": 1401, 2025-03-17T17:43:12.8764798Z "digest": "sha256:60a243865e5360bdbc737efdb72de7843d155949e6d901ab8c26e8066bf4f362" 2025-03-17T17:43:12.8765292Z }, 2025-03-17T17:43:12.8765503Z { 2025-03-17T17:43:12.8765857Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8766301Z "size": 700, 2025-03-17T17:43:12.8766743Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8767255Z }, 2025-03-17T17:43:12.8767465Z { 2025-03-17T17:43:12.8767816Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8768264Z "size": 140, 2025-03-17T17:43:12.8768695Z "digest": "sha256:b182a0bac1a1896049b91601060d4eee39b0af1f7afc71b9d973e576fd452998" 2025-03-17T17:43:12.8769199Z }, 2025-03-17T17:43:12.8769414Z { 2025-03-17T17:43:12.8769768Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8770214Z "size": 32, 2025-03-17T17:43:12.8770641Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8771144Z }, 2025-03-17T17:43:12.8771355Z { 2025-03-17T17:43:12.8771706Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8772221Z "size": 155, 2025-03-17T17:43:12.8772646Z "digest": "sha256:95fa35d609c893d1bb791435576ce0524c281eb1079091811c3ec520497fe2ba" 2025-03-17T17:43:12.8773139Z }, 2025-03-17T17:43:12.8773357Z { 2025-03-17T17:43:12.8773713Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8774160Z "size": 1401, 2025-03-17T17:43:12.8774599Z "digest": "sha256:60a243865e5360bdbc737efdb72de7843d155949e6d901ab8c26e8066bf4f362" 2025-03-17T17:43:12.8775102Z }, 2025-03-17T17:43:12.8775321Z { 2025-03-17T17:43:12.8775675Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8776121Z "size": 700, 2025-03-17T17:43:12.8776572Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8777098Z }, 2025-03-17T17:43:12.8777305Z { 2025-03-17T17:43:12.8777658Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8778118Z "size": 140, 2025-03-17T17:43:12.8778565Z "digest": "sha256:e969307a15f36044bdba1fef09aa4bf705df39bf8f409eb7491b46bfad3e723a" 2025-03-17T17:43:12.8779077Z }, 2025-03-17T17:43:12.8779292Z { 2025-03-17T17:43:12.8779647Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8780095Z "size": 32, 2025-03-17T17:43:12.8780535Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8781043Z }, 2025-03-17T17:43:12.8781262Z { 2025-03-17T17:43:12.8781612Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8782055Z "size": 160, 2025-03-17T17:43:12.8782490Z "digest": "sha256:062e558f85952c88d7ef1a4e2dc58e458b640ccc4622fc70e141beaac40dc3bb" 2025-03-17T17:43:12.8782996Z }, 
2025-03-17T17:43:12.8783275Z { 2025-03-17T17:43:12.8783614Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8784058Z "size": 765, 2025-03-17T17:43:12.8784487Z "digest": "sha256:405a0f898da6946734f16d703206a431f303a8a603af5b5e3e5e29470f63bde2" 2025-03-17T17:43:12.8784980Z }, 2025-03-17T17:43:12.8785192Z { 2025-03-17T17:43:12.8785540Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8785990Z "size": 700, 2025-03-17T17:43:12.8805861Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8806409Z }, 2025-03-17T17:43:12.8806634Z { 2025-03-17T17:43:12.8806992Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8807446Z "size": 140, 2025-03-17T17:43:12.8807889Z "digest": "sha256:57c016712ec8a3edacde25678ee1ed7da52e1fd31f3a71fb3eb9e78461fb7d4c" 2025-03-17T17:43:12.8808405Z }, 2025-03-17T17:43:12.8808619Z { 2025-03-17T17:43:12.8808989Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8809437Z "size": 32, 2025-03-17T17:43:12.8809872Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8810389Z }, 2025-03-17T17:43:12.8810604Z { 2025-03-17T17:43:12.8810953Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8811400Z "size": 160, 2025-03-17T17:43:12.8811843Z "digest": "sha256:d6e77e8938d445b4b0ddff3bda131e072ac7a14cd1d2f5ad8b5a50e5afcb6a0b" 2025-03-17T17:43:12.8812339Z }, 2025-03-17T17:43:12.8812550Z { 2025-03-17T17:43:12.8812901Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8813341Z "size": 908, 2025-03-17T17:43:12.8813765Z "digest": "sha256:a74edb648360711c5058abdb467a982e52d4704567c752ff2962efa21c55f6e8" 2025-03-17T17:43:12.8814259Z }, 2025-03-17T17:43:12.8814469Z { 2025-03-17T17:43:12.8814822Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8815264Z "size": 700, 2025-03-17T17:43:12.8815708Z "digest": "sha256:cf3a29e6adeebc347cef904e0cc0335ace07afb751b63eb06485ddd50fc93cc1" 2025-03-17T17:43:12.8816341Z }, 2025-03-17T17:43:12.8816556Z { 2025-03-17T17:43:12.8816905Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8817356Z "size": 135, 2025-03-17T17:43:12.8817795Z "digest": "sha256:3449b575a6aec9b0136df98ecdb3745a3c82b0ee075520aa542f34de149eb4d7" 2025-03-17T17:43:12.8818302Z }, 2025-03-17T17:43:12.8818507Z { 2025-03-17T17:43:12.8818855Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8819295Z "size": 32, 2025-03-17T17:43:12.8819726Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8820228Z }, 2025-03-17T17:43:12.8820436Z { 2025-03-17T17:43:12.8820780Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8821226Z "size": 157, 2025-03-17T17:43:12.8821656Z "digest": "sha256:0c4599265d5e0227bb3f5b96af1775bcb0827daa0a6e8d2cf40c3e7b3ab84196" 2025-03-17T17:43:12.8822155Z }, 2025-03-17T17:43:12.8822372Z { 2025-03-17T17:43:12.8822719Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8823171Z "size": 1484, 2025-03-17T17:43:12.8823603Z "digest": "sha256:8393bf48bd09f6b89f1f2dff32b879116277b46d367f9e63c106a93ed4f7c0bb" 2025-03-17T17:43:12.8824108Z }, 2025-03-17T17:43:12.8824322Z { 2025-03-17T17:43:12.8824661Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8825111Z "size": 32, 2025-03-17T17:43:12.8825547Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8826059Z }, 2025-03-17T17:43:12.8826270Z { 2025-03-17T17:43:12.8826728Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8827177Z "size": 136, 2025-03-17T17:43:12.8827698Z "digest": "sha256:a963cfdf0eeae5495b88e6f3a76756a125738bdb204c699d0811dda749498776" 2025-03-17T17:43:12.8828205Z }, 2025-03-17T17:43:12.8828419Z { 2025-03-17T17:43:12.8828772Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8829226Z "size": 381, 2025-03-17T17:43:12.8829664Z "digest": "sha256:3cca2552aa9c5784d16c25b4e22b83845d9eadd800d87be79ce80627deeb335a" 2025-03-17T17:43:12.8830172Z }, 2025-03-17T17:43:12.8830386Z { 2025-03-17T17:43:12.8830734Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8831176Z "size": 32, 2025-03-17T17:43:12.8831598Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8832101Z }, 2025-03-17T17:43:12.8832313Z { 2025-03-17T17:43:12.8832661Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8833100Z "size": 104, 2025-03-17T17:43:12.8833540Z "digest": "sha256:0813babc0471dba8e1aabb8a12eb606509626c27950a25bd0cddddb65af141b4" 2025-03-17T17:43:12.8834054Z }, 2025-03-17T17:43:12.8834272Z { 2025-03-17T17:43:12.8834624Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8835074Z "size": 1898, 2025-03-17T17:43:12.8835507Z "digest": "sha256:3e77e9716ba620bcc1682e9108d47f5018fb31bd4c2b4474121ef894e7594bc7" 2025-03-17T17:43:12.8836002Z }, 2025-03-17T17:43:12.8836211Z { 2025-03-17T17:43:12.8836558Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8837228Z "size": 234788722, 2025-03-17T17:43:12.8837692Z "digest": "sha256:0d19b4243bcec6c0862af7eb9e13aec6fbbdc56bb43c01f29ed6393fc1055c4f" 2025-03-17T17:43:12.8838192Z }, 2025-03-17T17:43:12.8838410Z { 2025-03-17T17:43:12.8838763Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8839211Z "size": 106, 2025-03-17T17:43:12.8839661Z "digest": "sha256:bf1c8c8ee4ee626fecef68e2293f85c83b32d41fe1e7adc040502440d5e4fde2" 2025-03-17T17:43:12.8840171Z }, 2025-03-17T17:43:12.8840385Z { 2025-03-17T17:43:12.8840734Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8841326Z "size": 165, 2025-03-17T17:43:12.8841761Z "digest": "sha256:d5e6ad68da49819c36793e2ee2261d94c3da32441910b30c959f2fbb88389b13" 2025-03-17T17:43:12.8842267Z }, 2025-03-17T17:43:12.8842487Z { 2025-03-17T17:43:12.8842838Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8843279Z "size": 7943, 2025-03-17T17:43:12.8843714Z "digest": "sha256:66d96199d00f91b74d44a3ee655cecfce5ffc53406bb9a5c560b9c3005ae0e43" 2025-03-17T17:43:12.8844216Z }, 2025-03-17T17:43:12.8844416Z { 2025-03-17T17:43:12.8844763Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8845206Z "size": 8069, 2025-03-17T17:43:12.8845636Z "digest": "sha256:464d0925da53bba45578418c9183769bb8ccd4b91c51c79295b377e26a24c10b" 2025-03-17T17:43:12.8846134Z }, 2025-03-17T17:43:12.8846352Z { 2025-03-17T17:43:12.8846700Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8847143Z "size": 304, 
2025-03-17T17:43:12.8847593Z "digest": "sha256:28c8ceadedf60f63acfbe03628ece2da346edbb99eae1fb66194d3d624549766" 2025-03-17T17:43:12.8848106Z }, 2025-03-17T17:43:12.8848321Z { 2025-03-17T17:43:12.8848670Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8849111Z "size": 32, 2025-03-17T17:43:12.8849543Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8850053Z }, 2025-03-17T17:43:12.8850263Z { 2025-03-17T17:43:12.8850601Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8851040Z "size": 108, 2025-03-17T17:43:12.8851478Z "digest": "sha256:9283c1297e780efbe11483a0fe2496c23f8e7ffeca2406dc7acdff1b7f990c42" 2025-03-17T17:43:12.8851987Z }, 2025-03-17T17:43:12.8852205Z { 2025-03-17T17:43:12.8852650Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8853101Z "size": 54145659, 2025-03-17T17:43:12.8853543Z "digest": "sha256:e267976dd03c40f0cbffb628555434d0568da80169658a18896ee74051d47376" 2025-03-17T17:43:12.8854040Z }, 2025-03-17T17:43:12.8854255Z { 2025-03-17T17:43:12.8854603Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-03-17T17:43:12.8855044Z "size": 32, 2025-03-17T17:43:12.8855477Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-03-17T17:43:12.8855979Z } 2025-03-17T17:43:12.8856183Z ] 2025-03-17T17:43:12.8856392Z } 2025-03-17T17:43:12.8856616Z + exit 0 2025-03-17T17:43:12.8895136Z ##[group]Run set -eux 2025-03-17T17:43:12.8895429Z set -eux 2025-03-17T17:43:12.8896356Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin 2025-03-17T17:43:12.8903001Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:12.8903397Z env: 2025-03-17T17:43:12.8903633Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:12.8903924Z ##[endgroup] 2025-03-17T17:43:12.8929184Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-03-17T17:43:12.8930174Z + jq --raw-output .SecretString 2025-03-17T17:43:12.8930788Z + jq -r .docker_hub_readonly_token 2025-03-17T17:43:12.8932340Z + docker login --username pytorchbot --password-stdin 2025-03-17T17:43:13.4579023Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-03-17T17:43:13.4579714Z Configure a credential helper to remove this warning. 
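[editor's note] The step above fetches a read-only Docker Hub token from AWS Secrets Manager and pipes it straight into docker login --password-stdin, so the credential never appears on a command line. A minimal Python sketch of the same flow, using boto3 instead of the aws CLI pipeline shown in the log (the secret id docker_hub_readonly_token and the pytorchbot username come from the log; the function name and boto3 usage are illustrative assumptions):

import json
import subprocess

import boto3  # assumes the runner already has AWS credentials configured


def docker_hub_login(secret_id: str = "docker_hub_readonly_token") -> None:
    # SecretString is itself a JSON document keyed by the token name.
    secret = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
    token = json.loads(secret["SecretString"])["docker_hub_readonly_token"]
    # Feed the token over stdin so it never shows up in process arguments or logs.
    subprocess.run(
        ["docker", "login", "--username", "pytorchbot", "--password-stdin"],
        input=token.encode(),
        check=True,
    )

As with the shell pipeline in the log, the token is only persisted by Docker's own config.json, which is exactly what the credential-helper warning that follows is about.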
See 2025-03-17T17:43:13.4580557Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-03-17T17:43:13.4580989Z 2025-03-17T17:43:13.4581099Z Login Succeeded 2025-03-17T17:43:13.4668777Z ##[group]Run tag=${ECR_DOCKER_IMAGE##*/} 2025-03-17T17:43:13.4669188Z tag=${ECR_DOCKER_IMAGE##*/} 2025-03-17T17:43:13.4669612Z echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" 2025-03-17T17:43:13.4675524Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:13.4676091Z env: 2025-03-17T17:43:13.4676335Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:13.4677090Z ECR_DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.4677871Z ##[endgroup] 2025-03-17T17:43:13.4702699Z docker pull ghcr.io/pytorch/ci-image:pytorch-linux-focal-py3.13-clang10-70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.4758249Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2025-03-17T17:43:13.4758724Z with: 2025-03-17T17:43:13.4759416Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.4760296Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:13.4760714Z env: 2025-03-17T17:43:13.4760949Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:13.4761234Z ##[endgroup] 2025-03-17T17:43:13.4787381Z ##[group]Run set -x 2025-03-17T17:43:13.4787680Z set -x 2025-03-17T17:43:13.4787938Z set +e 2025-03-17T17:43:13.4788185Z  2025-03-17T17:43:13.4788424Z login() { 2025-03-17T17:43:13.4788936Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-03-17T17:43:13.4789493Z } 2025-03-17T17:43:13.4789722Z  2025-03-17T17:43:13.4789999Z retry () { 2025-03-17T17:43:13.4790295Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-03-17T17:43:13.4790639Z } 2025-03-17T17:43:13.4790872Z  2025-03-17T17:43:13.4791132Z retry login "${DOCKER_REGISTRY}" 2025-03-17T17:43:13.4791470Z  2025-03-17T17:43:13.4791697Z set -e 2025-03-17T17:43:13.4792055Z # ignore output since only exit code is used for conditional 2025-03-17T17:43:13.4792585Z # only pull docker image if it's not available locally 2025-03-17T17:43:13.4793179Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-03-17T17:43:13.4793723Z  retry docker pull "${DOCKER_IMAGE}" 2025-03-17T17:43:13.4794075Z fi 2025-03-17T17:43:13.4799214Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:43:13.4799612Z env: 2025-03-17T17:43:13.4799855Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:43:13.4800607Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.4801480Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:13.4801897Z ##[endgroup] 2025-03-17T17:43:13.4822694Z + set +e 2025-03-17T17:43:13.4823308Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:13.4823805Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:13.4826079Z + aws ecr get-login-password --region us-east-1 2025-03-17T17:43:13.4827418Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-03-17T17:43:13.9864411Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 
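[editor's note] Two small pieces of shell above are easy to misread: the ghcr.io mirror tag is derived from the ECR image name purely with parameter expansion (${ECR_DOCKER_IMAGE##*/} keeps the part after the last slash, ${tag/:/-} swaps the colon for a dash), and the pull-docker-image action only pulls when docker inspect reports the image missing, wrapped in a three-attempt retry with 1s and 2s back-off. A rough Python equivalent of that logic, assuming the same image naming convention (the function names are illustrative, not part of the action):

import subprocess
import time


def ghcr_mirror_tag(ecr_image: str) -> str:
    # ${ECR_DOCKER_IMAGE##*/} then ${tag/:/-}
    tag = ecr_image.rsplit("/", 1)[-1].replace(":", "-", 1)
    return f"ghcr.io/pytorch/ci-image:{tag}"


def retry(cmd: list[str], delays=(1, 2)) -> None:
    # Mirror of retry(): one attempt, then retries after sleeping 1s and 2s.
    for delay in delays:
        if subprocess.run(cmd).returncode == 0:
            return
        time.sleep(delay)
    subprocess.run(cmd, check=True)


def pull_if_missing(image: str) -> None:
    # Only pull when the image is not already in the local Docker cache.
    cached = subprocess.run(
        ["docker", "inspect", "--type=image", image],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0
    if not cached:
        retry(["docker", "pull", image])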
2025-03-17T17:43:13.9865164Z Configure a credential helper to remove this warning. See 2025-03-17T17:43:13.9866112Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-03-17T17:43:13.9866797Z 2025-03-17T17:43:13.9866946Z Login Succeeded 2025-03-17T17:43:13.9875443Z + set -e 2025-03-17T17:43:13.9876259Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.9977780Z + retry docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:13.9979032Z + docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:43:14.2467854Z 70252cb1aa0d6173d24140841afd02bc363684c5: Pulling from pytorch/pytorch-linux-focal-py3.13-clang10 2025-03-17T17:43:14.2469145Z 86e5016c2693: Pulling fs layer 2025-03-17T17:43:14.2469747Z 9fca0f71e106: Pulling fs layer 2025-03-17T17:43:14.2470241Z 48fc79ac51c7: Pulling fs layer 2025-03-17T17:43:14.2470717Z 1502f91aae28: Pulling fs layer 2025-03-17T17:43:14.2471216Z 122d0dd9d0af: Pulling fs layer 2025-03-17T17:43:14.2472010Z 7937e7d835eb: Pulling fs layer 2025-03-17T17:43:14.2472531Z 51c49d389de8: Pulling fs layer 2025-03-17T17:43:14.2473001Z 0b6a0950cc92: Pulling fs layer 2025-03-17T17:43:14.2473451Z a7271d4daa69: Pulling fs layer 2025-03-17T17:43:14.2474089Z 58944619eebf: Pulling fs layer 2025-03-17T17:43:14.2474602Z 36598b9d00be: Pulling fs layer 2025-03-17T17:43:14.2475112Z cf3a29e6adee: Pulling fs layer 2025-03-17T17:43:14.2475591Z 3d159d7e02e0: Pulling fs layer 2025-03-17T17:43:14.2476133Z 708acdd578e7: Pulling fs layer 2025-03-17T17:43:14.2476577Z 4f4fb700ef54: Pulling fs layer 2025-03-17T17:43:14.2476878Z 51c49d389de8: Waiting 2025-03-17T17:43:14.2477223Z 122d0dd9d0af: Waiting 2025-03-17T17:43:14.2477501Z ee561d63497b: Pulling fs layer 2025-03-17T17:43:14.2477834Z 7937e7d835eb: Waiting 2025-03-17T17:43:14.2478178Z cf3a29e6adee: Waiting 2025-03-17T17:43:14.2478443Z 3d159d7e02e0: Waiting 2025-03-17T17:43:14.2478782Z 0b6a0950cc92: Waiting 2025-03-17T17:43:14.2479035Z 708acdd578e7: Waiting 2025-03-17T17:43:14.2479322Z a7271d4daa69: Waiting 2025-03-17T17:43:14.2479635Z 4f4fb700ef54: Waiting 2025-03-17T17:43:14.2479893Z 58944619eebf: Waiting 2025-03-17T17:43:14.2480204Z 36598b9d00be: Waiting 2025-03-17T17:43:14.2480475Z 1502f91aae28: Waiting 2025-03-17T17:43:14.2480745Z 6869a34d3085: Pulling fs layer 2025-03-17T17:43:14.2481109Z 750d0a5ea8af: Pulling fs layer 2025-03-17T17:43:14.2481405Z ee561d63497b: Waiting 2025-03-17T17:43:14.2481719Z 643ddef2c794: Pulling fs layer 2025-03-17T17:43:14.2482041Z be4499a3cc2e: Pulling fs layer 2025-03-17T17:43:14.2482351Z 5293b32ec021: Pulling fs layer 2025-03-17T17:43:14.2482719Z 107d9d8d8628: Pulling fs layer 2025-03-17T17:43:14.2483019Z 643ddef2c794: Waiting 2025-03-17T17:43:14.2483336Z 6869a34d3085: Waiting 2025-03-17T17:43:14.2483602Z 750d0a5ea8af: Waiting 2025-03-17T17:43:14.2483870Z be4499a3cc2e: Waiting 2025-03-17T17:43:14.2484145Z 8d95eba27708: Pulling fs layer 2025-03-17T17:43:14.2484458Z 2c2d5cb4739a: Pulling fs layer 2025-03-17T17:43:14.2484766Z 97b06d62430a: Pulling fs layer 2025-03-17T17:43:14.2485077Z 3308da525a94: Pulling fs layer 2025-03-17T17:43:14.2485390Z 1e826e5aad15: Pulling fs layer 2025-03-17T17:43:14.2485700Z cf5152e32c6a: Pulling fs layer 2025-03-17T17:43:14.2486011Z 
d8bb828d3111: Pulling fs layer 2025-03-17T17:43:14.2486294Z 3308da525a94: Waiting 2025-03-17T17:43:14.2486559Z 107d9d8d8628: Waiting 2025-03-17T17:43:14.2486837Z b2a7239ab16c: Pulling fs layer 2025-03-17T17:43:14.2487134Z 1e826e5aad15: Waiting 2025-03-17T17:43:14.2487396Z 8d95eba27708: Waiting 2025-03-17T17:43:14.2487679Z 0ce8c7e7a00b: Pulling fs layer 2025-03-17T17:43:14.2487972Z cf5152e32c6a: Waiting 2025-03-17T17:43:14.2488240Z 2c2d5cb4739a: Waiting 2025-03-17T17:43:14.2488515Z 8a958ba06a54: Pulling fs layer 2025-03-17T17:43:14.2488806Z d8bb828d3111: Waiting 2025-03-17T17:43:14.2489080Z 4faaa33c53bd: Pulling fs layer 2025-03-17T17:43:14.2489372Z b2a7239ab16c: Waiting 2025-03-17T17:43:14.2489631Z 0ce8c7e7a00b: Waiting 2025-03-17T17:43:14.2489903Z 05dec121f1a5: Pulling fs layer 2025-03-17T17:43:14.2490539Z 8a958ba06a54: Waiting 2025-03-17T17:43:14.2490806Z 2bb9b1c0bd33: Pulling fs layer 2025-03-17T17:43:14.2491103Z 05dec121f1a5: Waiting 2025-03-17T17:43:14.2491376Z 9c859a607fd7: Pulling fs layer 2025-03-17T17:43:14.2491680Z ce828f4c8751: Pulling fs layer 2025-03-17T17:43:14.2491972Z 2bb9b1c0bd33: Waiting 2025-03-17T17:43:14.2492404Z e9b9ea34fa7d: Pulling fs layer 2025-03-17T17:43:14.2492908Z 7d26eaf3bcc6: Pulling fs layer 2025-03-17T17:43:14.2493647Z 9c859a607fd7: Waiting 2025-03-17T17:43:14.2494077Z 2016aa41a004: Pulling fs layer 2025-03-17T17:43:14.2494561Z ce828f4c8751: Waiting 2025-03-17T17:43:14.2494826Z e9b9ea34fa7d: Waiting 2025-03-17T17:43:14.2495089Z 7d26eaf3bcc6: Waiting 2025-03-17T17:43:14.2495384Z 8f8e06ffc424: Pulling fs layer 2025-03-17T17:43:14.2495687Z 11a7893112d9: Pulling fs layer 2025-03-17T17:43:14.2496087Z efaaabeb52d4: Pulling fs layer 2025-03-17T17:43:14.2496397Z 6709e381c215: Pulling fs layer 2025-03-17T17:43:14.2496694Z 11a7893112d9: Waiting 2025-03-17T17:43:14.2497101Z fdad8b1754ce: Pulling fs layer 2025-03-17T17:43:14.2497411Z efaaabeb52d4: Waiting 2025-03-17T17:43:14.2497698Z 5be9bd12f5ac: Pulling fs layer 2025-03-17T17:43:14.2497998Z 6709e381c215: Waiting 2025-03-17T17:43:14.2498273Z 489c21ceb4a2: Pulling fs layer 2025-03-17T17:43:14.2498578Z 81f8b136cee2: Pulling fs layer 2025-03-17T17:43:14.2498879Z 60a243865e53: Pulling fs layer 2025-03-17T17:43:14.2499170Z 81f8b136cee2: Waiting 2025-03-17T17:43:14.2499426Z 60a243865e53: Waiting 2025-03-17T17:43:14.2499698Z b182a0bac1a1: Pulling fs layer 2025-03-17T17:43:14.2500012Z 95fa35d609c8: Pulling fs layer 2025-03-17T17:43:14.2500302Z b182a0bac1a1: Waiting 2025-03-17T17:43:14.2500571Z e969307a15f3: Pulling fs layer 2025-03-17T17:43:14.2500859Z 95fa35d609c8: Waiting 2025-03-17T17:43:14.2501223Z 062e558f8595: Pulling fs layer 2025-03-17T17:43:14.2501514Z 405a0f898da6: Pulling fs layer 2025-03-17T17:43:14.2501807Z e969307a15f3: Waiting 2025-03-17T17:43:14.2502201Z 57c016712ec8: Pulling fs layer 2025-03-17T17:43:14.2502576Z d6e77e8938d4: Pulling fs layer 2025-03-17T17:43:14.2502879Z a74edb648360: Pulling fs layer 2025-03-17T17:43:14.2503180Z 3449b575a6ae: Pulling fs layer 2025-03-17T17:43:14.2503483Z 0c4599265d5e: Pulling fs layer 2025-03-17T17:43:14.2503773Z 405a0f898da6: Waiting 2025-03-17T17:43:14.2504039Z 8393bf48bd09: Pulling fs layer 2025-03-17T17:43:14.2504325Z 57c016712ec8: Waiting 2025-03-17T17:43:14.2504596Z a963cfdf0eea: Pulling fs layer 2025-03-17T17:43:14.2504903Z 3cca2552aa9c: Pulling fs layer 2025-03-17T17:43:14.2505195Z 3449b575a6ae: Waiting 2025-03-17T17:43:14.2505471Z 0813babc0471: Pulling fs layer 2025-03-17T17:43:14.2505764Z d6e77e8938d4: Waiting 2025-03-17T17:43:14.2506022Z 3e77e9716ba6: Pulling fs 
layer 2025-03-17T17:43:14.2506315Z a74edb648360: Waiting 2025-03-17T17:43:14.2506645Z 062e558f8595: Waiting 2025-03-17T17:43:14.2506915Z 0d19b4243bce: Pulling fs layer 2025-03-17T17:43:14.2507225Z bf1c8c8ee4ee: Pulling fs layer 2025-03-17T17:43:14.2507596Z 8393bf48bd09: Waiting 2025-03-17T17:43:14.2507877Z d5e6ad68da49: Pulling fs layer 2025-03-17T17:43:14.2508298Z 0c4599265d5e: Waiting 2025-03-17T17:43:14.2508605Z 66d96199d00f: Pulling fs layer 2025-03-17T17:43:14.2508962Z a963cfdf0eea: Waiting 2025-03-17T17:43:14.2509234Z 464d0925da53: Pulling fs layer 2025-03-17T17:43:14.2509522Z 3e77e9716ba6: Waiting 2025-03-17T17:43:14.2509781Z 0d19b4243bce: Waiting 2025-03-17T17:43:14.2510082Z bf1c8c8ee4ee: Waiting 2025-03-17T17:43:14.2510384Z 3cca2552aa9c: Waiting 2025-03-17T17:43:14.2510647Z 28c8ceadedf6: Pulling fs layer 2025-03-17T17:43:14.2510982Z 464d0925da53: Waiting 2025-03-17T17:43:14.2511247Z 0813babc0471: Waiting 2025-03-17T17:43:14.2511520Z 9283c1297e78: Pulling fs layer 2025-03-17T17:43:14.2511888Z e267976dd03c: Pulling fs layer 2025-03-17T17:43:14.2512179Z 9283c1297e78: Waiting 2025-03-17T17:43:14.2512493Z e267976dd03c: Waiting 2025-03-17T17:43:14.2512753Z 66d96199d00f: Waiting 2025-03-17T17:43:14.3244745Z 9fca0f71e106: Verifying Checksum 2025-03-17T17:43:14.3245337Z 9fca0f71e106: Download complete 2025-03-17T17:43:14.4160751Z 1502f91aae28: Verifying Checksum 2025-03-17T17:43:14.4161181Z 1502f91aae28: Download complete 2025-03-17T17:43:14.6011265Z 86e5016c2693: Verifying Checksum 2025-03-17T17:43:14.6011761Z 86e5016c2693: Download complete 2025-03-17T17:43:14.6941587Z 7937e7d835eb: Download complete 2025-03-17T17:43:14.7811589Z 51c49d389de8: Download complete 2025-03-17T17:43:14.8791830Z 0b6a0950cc92: Verifying Checksum 2025-03-17T17:43:14.8792502Z 0b6a0950cc92: Download complete 2025-03-17T17:43:14.9724093Z a7271d4daa69: Verifying Checksum 2025-03-17T17:43:14.9725004Z a7271d4daa69: Download complete 2025-03-17T17:43:15.0486213Z 58944619eebf: Verifying Checksum 2025-03-17T17:43:15.0486821Z 58944619eebf: Download complete 2025-03-17T17:43:15.1563084Z 36598b9d00be: Verifying Checksum 2025-03-17T17:43:15.1563715Z 36598b9d00be: Download complete 2025-03-17T17:43:15.2456483Z cf3a29e6adee: Download complete 2025-03-17T17:43:15.2611281Z 122d0dd9d0af: Verifying Checksum 2025-03-17T17:43:15.2611773Z 122d0dd9d0af: Download complete 2025-03-17T17:43:15.3275688Z 3d159d7e02e0: Verifying Checksum 2025-03-17T17:43:15.3276355Z 3d159d7e02e0: Download complete 2025-03-17T17:43:15.3444826Z 4f4fb700ef54: Verifying Checksum 2025-03-17T17:43:15.3445394Z 4f4fb700ef54: Download complete 2025-03-17T17:43:15.4366227Z ee561d63497b: Download complete 2025-03-17T17:43:15.4471366Z 86e5016c2693: Pull complete 2025-03-17T17:43:15.4697108Z 9fca0f71e106: Pull complete 2025-03-17T17:43:15.5249349Z 6869a34d3085: Verifying Checksum 2025-03-17T17:43:15.5249929Z 6869a34d3085: Download complete 2025-03-17T17:43:15.6126579Z 750d0a5ea8af: Verifying Checksum 2025-03-17T17:43:15.6127131Z 750d0a5ea8af: Download complete 2025-03-17T17:43:15.7190973Z 643ddef2c794: Verifying Checksum 2025-03-17T17:43:15.7191543Z 643ddef2c794: Download complete 2025-03-17T17:43:15.7822300Z be4499a3cc2e: Verifying Checksum 2025-03-17T17:43:15.7822727Z be4499a3cc2e: Download complete 2025-03-17T17:43:15.8764938Z 5293b32ec021: Download complete 2025-03-17T17:43:15.9758121Z 107d9d8d8628: Verifying Checksum 2025-03-17T17:43:15.9758548Z 107d9d8d8628: Download complete 2025-03-17T17:43:16.0717625Z 8d95eba27708: Verifying Checksum 2025-03-17T17:43:16.0718150Z 
8d95eba27708: Download complete 2025-03-17T17:43:16.1560892Z 2c2d5cb4739a: Verifying Checksum 2025-03-17T17:43:16.1561755Z 2c2d5cb4739a: Download complete 2025-03-17T17:43:17.4396692Z 97b06d62430a: Verifying Checksum 2025-03-17T17:43:17.4397141Z 97b06d62430a: Download complete 2025-03-17T17:43:17.5730420Z 48fc79ac51c7: Verifying Checksum 2025-03-17T17:43:17.5730819Z 48fc79ac51c7: Download complete 2025-03-17T17:43:17.6227216Z 3308da525a94: Download complete 2025-03-17T17:43:17.6641555Z 1e826e5aad15: Verifying Checksum 2025-03-17T17:43:17.6642256Z 1e826e5aad15: Download complete 2025-03-17T17:43:17.7049341Z cf5152e32c6a: Verifying Checksum 2025-03-17T17:43:17.7049970Z cf5152e32c6a: Download complete 2025-03-17T17:43:17.7360127Z d8bb828d3111: Verifying Checksum 2025-03-17T17:43:17.7360725Z d8bb828d3111: Download complete 2025-03-17T17:43:17.7715923Z b2a7239ab16c: Verifying Checksum 2025-03-17T17:43:17.7716575Z b2a7239ab16c: Download complete 2025-03-17T17:43:17.8670922Z 8a958ba06a54: Verifying Checksum 2025-03-17T17:43:17.8671409Z 8a958ba06a54: Download complete 2025-03-17T17:43:17.9437884Z 4faaa33c53bd: Verifying Checksum 2025-03-17T17:43:17.9438387Z 4faaa33c53bd: Download complete 2025-03-17T17:43:18.0761990Z 05dec121f1a5: Verifying Checksum 2025-03-17T17:43:18.0762502Z 05dec121f1a5: Download complete 2025-03-17T17:43:18.1756415Z 2bb9b1c0bd33: Verifying Checksum 2025-03-17T17:43:18.1757148Z 2bb9b1c0bd33: Download complete 2025-03-17T17:43:18.2608941Z 9c859a607fd7: Download complete 2025-03-17T17:43:18.3790022Z ce828f4c8751: Verifying Checksum 2025-03-17T17:43:18.3790924Z ce828f4c8751: Download complete 2025-03-17T17:43:18.8828108Z e9b9ea34fa7d: Verifying Checksum 2025-03-17T17:43:18.8828716Z e9b9ea34fa7d: Download complete 2025-03-17T17:43:18.9820848Z 7d26eaf3bcc6: Verifying Checksum 2025-03-17T17:43:18.9821395Z 7d26eaf3bcc6: Download complete 2025-03-17T17:43:19.0905814Z 2016aa41a004: Verifying Checksum 2025-03-17T17:43:19.0906487Z 2016aa41a004: Download complete 2025-03-17T17:43:19.1824568Z 8f8e06ffc424: Verifying Checksum 2025-03-17T17:43:19.1825121Z 8f8e06ffc424: Download complete 2025-03-17T17:43:19.2775548Z 11a7893112d9: Verifying Checksum 2025-03-17T17:43:19.2776158Z 11a7893112d9: Download complete 2025-03-17T17:43:19.5274279Z efaaabeb52d4: Verifying Checksum 2025-03-17T17:43:19.5274842Z efaaabeb52d4: Download complete 2025-03-17T17:43:19.6041378Z 6709e381c215: Verifying Checksum 2025-03-17T17:43:19.6042321Z 6709e381c215: Download complete 2025-03-17T17:43:19.6993941Z fdad8b1754ce: Verifying Checksum 2025-03-17T17:43:19.6994363Z fdad8b1754ce: Download complete 2025-03-17T17:43:19.8258124Z 5be9bd12f5ac: Download complete 2025-03-17T17:43:19.9363468Z 489c21ceb4a2: Verifying Checksum 2025-03-17T17:43:19.9364124Z 489c21ceb4a2: Download complete 2025-03-17T17:43:20.0422666Z 81f8b136cee2: Download complete 2025-03-17T17:43:20.1293727Z 60a243865e53: Verifying Checksum 2025-03-17T17:43:20.1295614Z 60a243865e53: Download complete 2025-03-17T17:43:20.2286869Z b182a0bac1a1: Verifying Checksum 2025-03-17T17:43:20.2287516Z b182a0bac1a1: Download complete 2025-03-17T17:43:20.3195442Z 95fa35d609c8: Verifying Checksum 2025-03-17T17:43:20.3196143Z 95fa35d609c8: Download complete 2025-03-17T17:43:20.3923689Z e969307a15f3: Verifying Checksum 2025-03-17T17:43:20.3924729Z e969307a15f3: Download complete 2025-03-17T17:43:20.4603383Z 062e558f8595: Verifying Checksum 2025-03-17T17:43:20.4604018Z 062e558f8595: Download complete 2025-03-17T17:43:20.5573710Z 405a0f898da6: Verifying Checksum 
2025-03-17T17:43:20.5574318Z 405a0f898da6: Download complete 2025-03-17T17:43:20.6640966Z 57c016712ec8: Download complete 2025-03-17T17:43:20.7519624Z d6e77e8938d4: Verifying Checksum 2025-03-17T17:43:20.8219220Z a74edb648360: Download complete 2025-03-17T17:43:20.9367290Z 3449b575a6ae: Verifying Checksum 2025-03-17T17:43:20.9367942Z 3449b575a6ae: Download complete 2025-03-17T17:43:21.0159700Z 0c4599265d5e: Verifying Checksum 2025-03-17T17:43:21.0160197Z 0c4599265d5e: Download complete 2025-03-17T17:43:21.0887974Z 8393bf48bd09: Verifying Checksum 2025-03-17T17:43:21.0888760Z 8393bf48bd09: Download complete 2025-03-17T17:43:21.1634097Z a963cfdf0eea: Download complete 2025-03-17T17:43:21.2294135Z 3cca2552aa9c: Verifying Checksum 2025-03-17T17:43:21.2294616Z 3cca2552aa9c: Download complete 2025-03-17T17:43:21.3853229Z 0813babc0471: Verifying Checksum 2025-03-17T17:43:21.3853876Z 0813babc0471: Download complete 2025-03-17T17:43:21.4690171Z 3e77e9716ba6: Verifying Checksum 2025-03-17T17:43:21.4690794Z 3e77e9716ba6: Download complete 2025-03-17T17:43:22.0542907Z 0ce8c7e7a00b: Verifying Checksum 2025-03-17T17:43:22.1489086Z 0ce8c7e7a00b: Download complete 2025-03-17T17:43:22.1489659Z bf1c8c8ee4ee: Verifying Checksum 2025-03-17T17:43:22.2182194Z bf1c8c8ee4ee: Download complete 2025-03-17T17:43:22.2183132Z d5e6ad68da49: Verifying Checksum 2025-03-17T17:43:22.2183621Z d5e6ad68da49: Download complete 2025-03-17T17:43:22.3259067Z 66d96199d00f: Download complete 2025-03-17T17:43:22.4192270Z 464d0925da53: Download complete 2025-03-17T17:43:22.4898881Z 28c8ceadedf6: Verifying Checksum 2025-03-17T17:43:22.4899327Z 28c8ceadedf6: Download complete 2025-03-17T17:43:22.5701969Z 9283c1297e78: Download complete 2025-03-17T17:43:23.1894377Z e267976dd03c: Verifying Checksum 2025-03-17T17:43:23.1895038Z e267976dd03c: Download complete 2025-03-17T17:43:23.8686010Z 0d19b4243bce: Verifying Checksum 2025-03-17T17:43:23.8686584Z 0d19b4243bce: Download complete 2025-03-17T17:43:27.0437569Z 48fc79ac51c7: Pull complete 2025-03-17T17:43:27.1957916Z 1502f91aae28: Pull complete 2025-03-17T17:43:30.9255768Z 122d0dd9d0af: Pull complete 2025-03-17T17:43:31.1412951Z 7937e7d835eb: Pull complete 2025-03-17T17:43:31.3486940Z 51c49d389de8: Pull complete 2025-03-17T17:43:31.5656594Z 0b6a0950cc92: Pull complete 2025-03-17T17:43:31.7912367Z a7271d4daa69: Pull complete 2025-03-17T17:43:32.0140511Z 58944619eebf: Pull complete 2025-03-17T17:43:32.2301201Z 36598b9d00be: Pull complete 2025-03-17T17:43:32.4254959Z cf3a29e6adee: Pull complete 2025-03-17T17:43:32.6329722Z 3d159d7e02e0: Pull complete 2025-03-17T17:43:43.6834268Z 708acdd578e7: Verifying Checksum 2025-03-17T17:43:43.6834672Z 708acdd578e7: Download complete 2025-03-17T17:44:30.9785152Z 708acdd578e7: Pull complete 2025-03-17T17:44:30.9999792Z 4f4fb700ef54: Pull complete 2025-03-17T17:44:31.0214352Z ee561d63497b: Pull complete 2025-03-17T17:44:31.0423560Z 6869a34d3085: Pull complete 2025-03-17T17:44:31.0637652Z 750d0a5ea8af: Pull complete 2025-03-17T17:44:31.1274963Z 643ddef2c794: Pull complete 2025-03-17T17:44:31.1479257Z be4499a3cc2e: Pull complete 2025-03-17T17:44:31.1686757Z 5293b32ec021: Pull complete 2025-03-17T17:44:31.1895642Z 107d9d8d8628: Pull complete 2025-03-17T17:44:31.2306811Z 8d95eba27708: Pull complete 2025-03-17T17:44:31.2518497Z 2c2d5cb4739a: Pull complete 2025-03-17T17:44:34.2395482Z 97b06d62430a: Pull complete 2025-03-17T17:44:34.3386881Z 3308da525a94: Pull complete 2025-03-17T17:44:34.4024061Z 1e826e5aad15: Pull complete 2025-03-17T17:44:34.4419665Z 
cf5152e32c6a: Pull complete 2025-03-17T17:44:34.4859663Z d8bb828d3111: Pull complete 2025-03-17T17:44:34.5188773Z b2a7239ab16c: Pull complete 2025-03-17T17:44:45.0015318Z 0ce8c7e7a00b: Pull complete 2025-03-17T17:44:45.0221240Z 8a958ba06a54: Pull complete 2025-03-17T17:44:45.0419828Z 4faaa33c53bd: Pull complete 2025-03-17T17:44:45.0822066Z 05dec121f1a5: Pull complete 2025-03-17T17:44:45.1026511Z 2bb9b1c0bd33: Pull complete 2025-03-17T17:44:45.1437264Z 9c859a607fd7: Pull complete 2025-03-17T17:44:45.1642475Z ce828f4c8751: Pull complete 2025-03-17T17:44:46.7188417Z e9b9ea34fa7d: Pull complete 2025-03-17T17:44:46.7404832Z 7d26eaf3bcc6: Pull complete 2025-03-17T17:44:46.7611520Z 2016aa41a004: Pull complete 2025-03-17T17:44:46.8020138Z 8f8e06ffc424: Pull complete 2025-03-17T17:44:46.8225681Z 11a7893112d9: Pull complete 2025-03-17T17:44:47.0621818Z efaaabeb52d4: Pull complete 2025-03-17T17:44:47.0841359Z 6709e381c215: Pull complete 2025-03-17T17:44:47.1046127Z fdad8b1754ce: Pull complete 2025-03-17T17:44:47.1455167Z 5be9bd12f5ac: Pull complete 2025-03-17T17:44:47.1659505Z 489c21ceb4a2: Pull complete 2025-03-17T17:44:47.2074318Z 81f8b136cee2: Pull complete 2025-03-17T17:44:47.2309489Z 60a243865e53: Pull complete 2025-03-17T17:44:47.2714649Z b182a0bac1a1: Pull complete 2025-03-17T17:44:47.3122812Z 95fa35d609c8: Pull complete 2025-03-17T17:44:47.3738980Z e969307a15f3: Pull complete 2025-03-17T17:44:47.4141192Z 062e558f8595: Pull complete 2025-03-17T17:44:47.4350076Z 405a0f898da6: Pull complete 2025-03-17T17:44:47.4782433Z 57c016712ec8: Pull complete 2025-03-17T17:44:47.5186597Z d6e77e8938d4: Pull complete 2025-03-17T17:44:47.5389289Z a74edb648360: Pull complete 2025-03-17T17:44:47.5792617Z 3449b575a6ae: Pull complete 2025-03-17T17:44:47.6230350Z 0c4599265d5e: Pull complete 2025-03-17T17:44:47.6438625Z 8393bf48bd09: Pull complete 2025-03-17T17:44:47.6843419Z a963cfdf0eea: Pull complete 2025-03-17T17:44:47.7054715Z 3cca2552aa9c: Pull complete 2025-03-17T17:44:47.7493150Z 0813babc0471: Pull complete 2025-03-17T17:44:47.7699990Z 3e77e9716ba6: Pull complete 2025-03-17T17:44:54.8525192Z 0d19b4243bce: Pull complete 2025-03-17T17:44:54.9541130Z bf1c8c8ee4ee: Pull complete 2025-03-17T17:44:55.0580172Z d5e6ad68da49: Pull complete 2025-03-17T17:44:55.1539329Z 66d96199d00f: Pull complete 2025-03-17T17:44:55.2079994Z 464d0925da53: Pull complete 2025-03-17T17:44:55.2375449Z 28c8ceadedf6: Pull complete 2025-03-17T17:44:55.2971888Z 9283c1297e78: Pull complete 2025-03-17T17:44:57.0821798Z e267976dd03c: Pull complete 2025-03-17T17:44:57.2193499Z Digest: sha256:2be986ebdf9f912bf00998d401fc1f11365c93a5dd9c7f239a9fcf15540db4d8 2025-03-17T17:44:57.2435200Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:44:57.2619009Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:44:57.2667308Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-03-17T17:44:57.2668330Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-03-17T17:44:57.2676369Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:44:57.2676783Z env: 2025-03-17T17:44:57.2677028Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:44:57.2677505Z ##[endgroup] 
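[editor's note] The IN_CONTAINER_RUNNER step that just ran decides whether the job is inside a container runner simply by probing for the marker files /.inarc or /.incontainer and appending the result to the file named by GITHUB_OUTPUT. A hedged Python sketch of the same check (the marker paths and output key come from the log; the helper name is made up):

import os


def record_in_container_runner() -> None:
    # Container / ARC runners drop /.inarc or /.incontainer at the filesystem root.
    in_container = os.path.exists("/.inarc") or os.path.exists("/.incontainer")
    # GITHUB_OUTPUT is a plain file; appended key=value lines become step outputs.
    with open(os.environ["GITHUB_OUTPUT"], "a") as out:
        out.write(f"IN_CONTAINER_RUNNER={'true' if in_container else 'false'}\n")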
2025-03-17T17:44:57.4306479Z Prepare all required actions 2025-03-17T17:44:57.4421785Z ##[group]Run ./.github/actions/get-workflow-job-id 2025-03-17T17:44:57.4422167Z with: 2025-03-17T17:44:57.4422582Z github-token: *** 2025-03-17T17:44:57.4422838Z env: 2025-03-17T17:44:57.4423078Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:44:57.4423365Z ##[endgroup] 2025-03-17T17:44:57.4482881Z ##[group]Run set -eux 2025-03-17T17:44:57.4483191Z set -eux 2025-03-17T17:44:57.4483669Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-03-17T17:44:57.4489722Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:44:57.4490136Z env: 2025-03-17T17:44:57.4490367Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:44:57.4490837Z GITHUB_TOKEN: *** 2025-03-17T17:44:57.4491087Z ##[endgroup] 2025-03-17T17:44:57.4513766Z + python3 .github/scripts/get_workflow_job_id.py 13905937446 i-0287a0cab9cae3fa7 2025-03-17T17:44:58.6849892Z setting job-id=38909654187 2025-03-17T17:44:58.6851043Z setting job-name=linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:44:58.6972995Z ##[group]Run python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84 dataclasses_json==0.6.7 2025-03-17T17:44:58.6973785Z python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84 dataclasses_json==0.6.7 2025-03-17T17:44:58.6974408Z python3 -m tools.stats.monitor > usage_log.txt 2>&1 & 2025-03-17T17:44:58.6974921Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2025-03-17T17:44:58.6981018Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:44:58.6981430Z env: 2025-03-17T17:44:58.6981669Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:44:58.6981958Z JOB_ID: 38909654187 2025-03-17T17:44:58.6982397Z JOB_NAME: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:44:58.6982900Z WORKFLOW_NAME: pull 2025-03-17T17:44:58.6983180Z WORKFLOW_RUN_ID: 13905937446 2025-03-17T17:44:58.6983475Z ##[endgroup] 2025-03-17T17:44:59.1179591Z Defaulting to user installation because normal site-packages is not writeable 2025-03-17T17:44:59.4893767Z Collecting psutil==5.9.1 2025-03-17T17:44:59.5127991Z Downloading psutil-5.9.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (281 kB) 2025-03-17T17:44:59.5532378Z Collecting nvidia-ml-py==11.525.84 2025-03-17T17:44:59.5564247Z Downloading nvidia_ml_py-11.525.84-py3-none-any.whl (34 kB) 2025-03-17T17:44:59.6169992Z Collecting dataclasses_json==0.6.7 2025-03-17T17:44:59.6202126Z Downloading dataclasses_json-0.6.7-py3-none-any.whl (28 kB) 2025-03-17T17:44:59.7364085Z Collecting marshmallow<4.0.0,>=3.18.0 2025-03-17T17:44:59.7395822Z Downloading marshmallow-3.26.1-py3-none-any.whl (50 kB) 2025-03-17T17:44:59.7623056Z Collecting typing-inspect<1,>=0.4.0 2025-03-17T17:44:59.7655577Z Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB) 2025-03-17T17:44:59.8179034Z Collecting packaging>=17.0 2025-03-17T17:44:59.8210139Z Downloading packaging-24.2-py3-none-any.whl (65 kB) 2025-03-17T17:44:59.8442994Z Collecting mypy-extensions>=0.3.0 2025-03-17T17:44:59.8473757Z Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB) 2025-03-17T17:44:59.8889542Z Collecting typing-extensions>=3.7.4 2025-03-17T17:44:59.8921213Z Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB) 2025-03-17T17:44:59.9806333Z Installing collected packages: typing-extensions, packaging, mypy-extensions, typing-inspect, marshmallow, psutil, nvidia-ml-py, dataclasses-json 
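[editor's note] The monitoring step above installs psutil, nvidia-ml-py and dataclasses_json, launches tools.stats.monitor in the background with its output redirected to usage_log.txt, and publishes the background PID as the monitor-script-pid output so a later step can stop it. A minimal Python sketch of that launch-and-record pattern, assuming the same module path (the wrapper function itself is illustrative):

import os
import subprocess
import sys


def start_usage_monitor() -> int:
    # Equivalent of: python3 -m tools.stats.monitor > usage_log.txt 2>&1 &
    log = open("usage_log.txt", "w")
    proc = subprocess.Popen(
        [sys.executable, "-m", "tools.stats.monitor"],
        stdout=log, stderr=subprocess.STDOUT,
    )
    # Equivalent of: echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}"
    with open(os.environ["GITHUB_OUTPUT"], "a") as out:
        out.write(f"monitor-script-pid={proc.pid}\n")
    return proc.pid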
2025-03-17T17:45:00.2493973Z Successfully installed dataclasses-json-0.6.7 marshmallow-3.26.1 mypy-extensions-1.0.0 nvidia-ml-py-11.525.84 packaging-24.2 psutil-5.9.1 typing-extensions-4.12.2 typing-inspect-0.9.0 2025-03-17T17:45:00.4251694Z Prepare all required actions 2025-03-17T17:45:00.4252129Z Getting action download info 2025-03-17T17:45:00.5485339Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6) 2025-03-17T17:45:00.8456056Z Download action repository 'actions/download-artifact@v4' (SHA:cc203385981b70ca67e1cc392babf9cc229d5806) 2025-03-17T17:45:01.1453577Z ##[group]Run ./.github/actions/download-build-artifacts 2025-03-17T17:45:01.1453980Z with: 2025-03-17T17:45:01.1454226Z name: linux-focal-py3.13-clang10 2025-03-17T17:45:01.1454563Z s3-bucket: gha-artifacts 2025-03-17T17:45:01.1454846Z env: 2025-03-17T17:45:01.1455083Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:01.1455365Z ##[endgroup] 2025-03-17T17:45:01.1490690Z ##[group]Run seemethere/download-artifact-s3@v4 2025-03-17T17:45:01.1491064Z with: 2025-03-17T17:45:01.1491316Z name: linux-focal-py3.13-clang10 2025-03-17T17:45:01.1491648Z s3-bucket: gha-artifacts 2025-03-17T17:45:01.1491974Z region: us-east-1 2025-03-17T17:45:01.1492223Z env: 2025-03-17T17:45:01.1492455Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:01.1492735Z ##[endgroup] 2025-03-17T17:45:01.6210670Z (node:98244) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-03-17T17:45:01.6211248Z 2025-03-17T17:45:01.6211455Z Please migrate your code to use AWS SDK for JavaScript (v3). 2025-03-17T17:45:01.6212040Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-03-17T17:45:01.6212647Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-03-17T17:45:01.8263418Z Found 1 objects with prefix pytorch/pytorch/13905937446/linux-focal-py3.13-clang10/ 2025-03-17T17:45:01.8264198Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-03-17T17:45:05.8709589Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-03-17T17:45:05.8715161Z Artifact download has finished successfully 2025-03-17T17:45:05.8893966Z ##[group]Run unzip -o artifacts.zip 2025-03-17T17:45:05.8894345Z unzip -o artifacts.zip 2025-03-17T17:45:05.8900016Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:05.8900447Z env: 2025-03-17T17:45:05.8900687Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:05.8900975Z ##[endgroup] 2025-03-17T17:45:05.8965308Z Archive: artifacts.zip 2025-03-17T17:45:05.8966519Z creating: dist/ 2025-03-17T17:45:06.8840847Z inflating: dist/torch-2.7.0a0+git52b8690-cp313-cp313-linux_x86_64.whl 2025-03-17T17:45:06.8841780Z creating: build/custom_test_artifacts/ 2025-03-17T17:45:06.8842535Z creating: build/custom_test_artifacts/custom-op-build/ 2025-03-17T17:45:06.8843381Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2025-03-17T17:45:06.8844546Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeOutput.log 2025-03-17T17:45:06.8845839Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/ 2025-03-17T17:45:06.8847125Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CMakeSystem.cmake 2025-03-17T17:45:06.8848494Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdC/ 2025-03-17T17:45:06.8849851Z creating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdC/tmp/ 2025-03-17T17:45:06.8851435Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdC/CMakeCCompilerId.c 2025-03-17T17:45:06.8853001Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdC/a.out 2025-03-17T17:45:06.8854426Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdCXX/ 2025-03-17T17:45:06.8855798Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdCXX/tmp/ 2025-03-17T17:45:06.8857704Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-03-17T17:45:06.8859367Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CompilerIdCXX/a.out 2025-03-17T17:45:06.8860921Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_C.bin 2025-03-17T17:45:06.8862755Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CMakeCCompiler.cmake 2025-03-17T17:45:06.8864346Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_CXX.bin 2025-03-17T17:45:06.8865908Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.18.5/CMakeCXXCompiler.cmake 2025-03-17T17:45:06.8867279Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2025-03-17T17:45:06.8868447Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2025-03-17T17:45:06.8869696Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2025-03-17T17:45:06.8887100Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2025-03-17T17:45:06.8888632Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2025-03-17T17:45:06.8890233Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2025-03-17T17:45:06.8891833Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2025-03-17T17:45:06.8893406Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2025-03-17T17:45:06.8894861Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2025-03-17T17:45:06.8896330Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2025-03-17T17:45:06.8943710Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/CXX.includecache 2025-03-17T17:45:06.8962097Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.internal 2025-03-17T17:45:06.9097928Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2025-03-17T17:45:06.9099326Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2025-03-17T17:45:06.9123723Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2025-03-17T17:45:06.9125331Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2025-03-17T17:45:06.9127001Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2025-03-17T17:45:06.9128687Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 
2025-03-17T17:45:06.9130347Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2025-03-17T17:45:06.9131845Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2025-03-17T17:45:06.9133449Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2025-03-17T17:45:06.9180337Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/CXX.includecache 2025-03-17T17:45:06.9198590Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.internal 2025-03-17T17:45:06.9245975Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2025-03-17T17:45:06.9247731Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-03-17T17:45:06.9249245Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2025-03-17T17:45:06.9250816Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2025-03-17T17:45:06.9252095Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2025-03-17T17:45:06.9253385Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2025-03-17T17:45:06.9254797Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2025-03-17T17:45:06.9255805Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2025-03-17T17:45:06.9256934Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2025-03-17T17:45:06.9380360Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2025-03-17T17:45:06.9419494Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2025-03-17T17:45:06.9420462Z creating: build/custom_test_artifacts/jit-hook-build/ 2025-03-17T17:45:06.9421330Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2025-03-17T17:45:06.9422601Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeOutput.log 2025-03-17T17:45:06.9423801Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/ 2025-03-17T17:45:06.9425098Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CMakeSystem.cmake 2025-03-17T17:45:06.9426534Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdC/ 2025-03-17T17:45:06.9427834Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdC/tmp/ 2025-03-17T17:45:06.9429375Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdC/CMakeCCompilerId.c 2025-03-17T17:45:06.9430911Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdC/a.out 2025-03-17T17:45:06.9432300Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdCXX/ 2025-03-17T17:45:06.9433669Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdCXX/tmp/ 2025-03-17T17:45:06.9435262Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-03-17T17:45:06.9437085Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CompilerIdCXX/a.out 2025-03-17T17:45:06.9438692Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_C.bin 2025-03-17T17:45:06.9440302Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CMakeCCompiler.cmake 2025-03-17T17:45:06.9441886Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_CXX.bin 2025-03-17T17:45:06.9443413Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.18.5/CMakeCXXCompiler.cmake 2025-03-17T17:45:06.9444684Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2025-03-17T17:45:06.9445800Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2025-03-17T17:45:06.9447058Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2025-03-17T17:45:06.9466838Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2025-03-17T17:45:06.9468357Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2025-03-17T17:45:06.9469999Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2025-03-17T17:45:06.9471629Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2025-03-17T17:45:06.9473242Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2025-03-17T17:45:06.9474712Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2025-03-17T17:45:06.9476205Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2025-03-17T17:45:06.9523453Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/CXX.includecache 2025-03-17T17:45:06.9541511Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.internal 2025-03-17T17:45:06.9573323Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2025-03-17T17:45:06.9575015Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-03-17T17:45:06.9576524Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2025-03-17T17:45:06.9577861Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2025-03-17T17:45:06.9579103Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2025-03-17T17:45:06.9580348Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2025-03-17T17:45:06.9581607Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2025-03-17T17:45:06.9582719Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2025-03-17T17:45:06.9583833Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2025-03-17T17:45:06.9607836Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2025-03-17T17:45:06.9608887Z creating: build/custom_test_artifacts/custom-backend-build/ 2025-03-17T17:45:06.9609844Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2025-03-17T17:45:06.9611106Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeOutput.log 2025-03-17T17:45:06.9612443Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/ 2025-03-17T17:45:06.9613817Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CMakeSystem.cmake 2025-03-17T17:45:06.9615281Z creating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdC/ 2025-03-17T17:45:06.9616727Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdC/tmp/ 2025-03-17T17:45:06.9618409Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdC/CMakeCCompilerId.c 2025-03-17T17:45:06.9620004Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdC/a.out 2025-03-17T17:45:06.9621463Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdCXX/ 2025-03-17T17:45:06.9622943Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdCXX/tmp/ 2025-03-17T17:45:06.9624675Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-03-17T17:45:06.9626524Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CompilerIdCXX/a.out 2025-03-17T17:45:06.9628227Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_C.bin 2025-03-17T17:45:06.9629959Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CMakeCCompiler.cmake 2025-03-17T17:45:06.9631613Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CMakeDetermineCompilerABI_CXX.bin 2025-03-17T17:45:06.9633276Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.18.5/CMakeCXXCompiler.cmake 2025-03-17T17:45:06.9634605Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2025-03-17T17:45:06.9635872Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2025-03-17T17:45:06.9637410Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2025-03-17T17:45:06.9657631Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2025-03-17T17:45:06.9659606Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2025-03-17T17:45:06.9661473Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2025-03-17T17:45:06.9663354Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2025-03-17T17:45:06.9665230Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2025-03-17T17:45:06.9667054Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2025-03-17T17:45:06.9668803Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2025-03-17T17:45:06.9714383Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/CXX.includecache 2025-03-17T17:45:06.9732393Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.internal 2025-03-17T17:45:06.9756758Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2025-03-17T17:45:06.9758407Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2025-03-17T17:45:06.9761933Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2025-03-17T17:45:06.9763619Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2025-03-17T17:45:06.9765401Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2025-03-17T17:45:06.9767177Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2025-03-17T17:45:06.9768776Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2025-03-17T17:45:06.9770463Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2025-03-17T17:45:06.9772051Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2025-03-17T17:45:06.9773758Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/CXX.includecache 2025-03-17T17:45:06.9777670Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.internal 2025-03-17T17:45:06.9857221Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2025-03-17T17:45:06.9859097Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-03-17T17:45:06.9860740Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2025-03-17T17:45:06.9862214Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2025-03-17T17:45:06.9863596Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2025-03-17T17:45:06.9864979Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2025-03-17T17:45:06.9866401Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2025-03-17T17:45:06.9867639Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2025-03-17T17:45:06.9868842Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2025-03-17T17:45:06.9934471Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2025-03-17T17:45:06.9954307Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2025-03-17T17:45:06.9955258Z creating: build/lib/ 2025-03-17T17:45:06.9955779Z inflating: build/lib/libclog.a 2025-03-17T17:45:07.0014384Z inflating: build/lib/libgtest.a 2025-03-17T17:45:07.0023667Z inflating: build/lib/libpthreadpool.a 2025-03-17T17:45:07.0030856Z inflating: build/lib/libcpuinfo_internals.a 2025-03-17T17:45:07.0038751Z inflating: build/lib/libcpuinfo.a 2025-03-17T17:45:07.0046658Z inflating: build/lib/libittnotify.a 2025-03-17T17:45:07.0106790Z inflating: build/lib/libbenchmark.a 2025-03-17T17:45:07.0178064Z inflating: build/lib/libprotobuf-lite.a 2025-03-17T17:45:07.0202357Z inflating: build/lib/libtensorpipe_uv.a 2025-03-17T17:45:07.0281415Z inflating: build/lib/libgloo.a 2025-03-17T17:45:07.0340896Z inflating: build/lib/libasmjit.a 2025-03-17T17:45:07.0710120Z inflating: build/lib/libprotobuf.a 2025-03-17T17:45:07.0727162Z inflating: build/lib/libfmt.a 2025-03-17T17:45:07.0822909Z inflating: build/lib/libc10.so 2025-03-17T17:45:07.0824196Z inflating: build/lib/libtorch_global_deps.so 2025-03-17T17:45:07.0841272Z inflating: build/lib/libpytorch_qnnpack.a 2025-03-17T17:45:07.0843461Z inflating: build/lib/libnnpack_reference_layers.a 2025-03-17T17:45:07.1254009Z inflating: build/lib/libprotoc.a 2025-03-17T17:45:07.1254685Z 
inflating: build/lib/libgtest_main.a 2025-03-17T17:45:07.1265589Z inflating: build/lib/libgmock.a 2025-03-17T17:45:07.1266358Z inflating: build/lib/libbenchmark_main.a 2025-03-17T17:45:07.1716215Z inflating: build/lib/libtensorpipe.a 2025-03-17T17:45:07.1732271Z inflating: build/lib/libnnpack.a 2025-03-17T17:45:07.2670831Z inflating: build/lib/libfbgemm.a 2025-03-17T17:45:08.5099595Z inflating: build/lib/libdnnl.a 2025-03-17T17:45:08.5339629Z inflating: build/lib/libkineto.a 2025-03-17T17:45:08.5340355Z inflating: build/lib/libgmock_main.a 2025-03-17T17:45:08.5380673Z inflating: build/lib/libonnx_proto.a 2025-03-17T17:45:08.6055225Z inflating: build/lib/libonnx.a 2025-03-17T17:45:08.6218087Z inflating: build/lib/libmicrokernels-prod.a 2025-03-17T17:45:08.6325501Z inflating: build/lib/libXNNPACK.a 2025-03-17T17:45:08.6965110Z inflating: build/lib/libmicrokernels-all.a 2025-03-17T17:45:11.0456524Z inflating: build/lib/libtorch_cpu.so 2025-03-17T17:45:11.0457394Z inflating: build/lib/libtorch.so 2025-03-17T17:45:11.0461545Z inflating: build/lib/libunbox_lib.a 2025-03-17T17:45:11.0465365Z inflating: build/lib/libshm.so 2025-03-17T17:45:11.0485854Z inflating: build/lib/libjitbackend_test.so 2025-03-17T17:45:11.0557000Z inflating: build/lib/libtorchbind_test.so 2025-03-17T17:45:11.0581231Z inflating: build/lib/libbackend_with_compiler.so 2025-03-17T17:45:11.0607239Z inflating: build/lib/libaoti_custom_ops.so 2025-03-17T17:45:11.2560398Z inflating: build/lib/libtorch_python.so 2025-03-17T17:45:11.2595882Z inflating: build/lib/libnnapi_backend.so 2025-03-17T17:45:11.2596289Z creating: build/bin/ 2025-03-17T17:45:11.2596604Z creating: build/bin/CMakeFiles/ 2025-03-17T17:45:11.2597199Z inflating: build/bin/CMakeFiles/CMakeDirectoryInformation.cmake 2025-03-17T17:45:11.2597772Z extracting: build/bin/CMakeFiles/progress.marks 2025-03-17T17:45:11.2598358Z inflating: build/bin/Makefile 2025-03-17T17:45:11.2598860Z inflating: build/bin/cmake_install.cmake 2025-03-17T17:45:11.2599573Z inflating: build/bin/CTestTestfile.cmake 2025-03-17T17:45:11.2648229Z inflating: build/bin/c10_TypeIndex_test 2025-03-17T17:45:11.2695753Z inflating: build/bin/c10_Synchronized_test 2025-03-17T17:45:11.2746246Z inflating: build/bin/c10_Metaprogramming_test 2025-03-17T17:45:11.2793585Z inflating: build/bin/c10_ConstexprCrc_test 2025-03-17T17:45:11.2842069Z inflating: build/bin/c10_ssize_test 2025-03-17T17:45:11.2894196Z inflating: build/bin/c10_LeftRight_test 2025-03-17T17:45:11.2941558Z inflating: build/bin/c10_DeadlockDetection_test 2025-03-17T17:45:11.2989604Z inflating: build/bin/c10_Half_test 2025-03-17T17:45:11.3041252Z inflating: build/bin/c10_ThreadLocal_test 2025-03-17T17:45:11.3090669Z inflating: build/bin/c10_NetworkFlow_test 2025-03-17T17:45:11.3163142Z inflating: build/bin/c10_optional_test 2025-03-17T17:45:11.3217389Z inflating: build/bin/c10_DispatchKeySet_test 2025-03-17T17:45:11.3274767Z inflating: build/bin/c10_ordered_preserving_dict_test 2025-03-17T17:45:11.3321484Z inflating: build/bin/c10_StreamGuard_test 2025-03-17T17:45:11.3369136Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2025-03-17T17:45:11.3416489Z inflating: build/bin/c10_tempfile_test 2025-03-17T17:45:11.3463536Z inflating: build/bin/c10_TypeTraits_test 2025-03-17T17:45:11.3512882Z inflating: build/bin/c10_Device_test 2025-03-17T17:45:11.3560353Z inflating: build/bin/c10_error_test 2025-03-17T17:45:11.3608919Z inflating: build/bin/c10_DeviceGuard_test 2025-03-17T17:45:11.3659902Z inflating: build/bin/c10_typeid_test 
2025-03-17T17:45:11.3710138Z inflating: build/bin/c10_Scalar_test 2025-03-17T17:45:11.3773664Z inflating: build/bin/c10_cow_test 2025-03-17T17:45:11.3821807Z inflating: build/bin/c10_SymInt_test 2025-03-17T17:45:11.3871753Z inflating: build/bin/c10_Bitset_test 2025-03-17T17:45:11.3924393Z inflating: build/bin/c10_SizesAndStrides_test 2025-03-17T17:45:11.3971707Z inflating: build/bin/c10_ArrayRef_test 2025-03-17T17:45:11.4023954Z inflating: build/bin/c10_InlineStreamGuard_test 2025-03-17T17:45:11.4075252Z inflating: build/bin/c10_InlineDeviceGuard_test 2025-03-17T17:45:11.4122850Z inflating: build/bin/c10_TypeList_test 2025-03-17T17:45:11.4171977Z inflating: build/bin/c10_accumulate_test 2025-03-17T17:45:11.4219669Z inflating: build/bin/c10_bit_cast_test 2025-03-17T17:45:11.4266754Z inflating: build/bin/c10_string_view_test 2025-03-17T17:45:11.4315210Z inflating: build/bin/c10_irange_test 2025-03-17T17:45:11.4366885Z inflating: build/bin/c10_bfloat16_test 2025-03-17T17:45:11.4416738Z inflating: build/bin/c10_exception_test 2025-03-17T17:45:11.4468566Z inflating: build/bin/c10_complex_test 2025-03-17T17:45:11.4516832Z inflating: build/bin/c10_flags_test 2025-03-17T17:45:11.4564056Z inflating: build/bin/c10_generic_math_test 2025-03-17T17:45:11.4617951Z inflating: build/bin/c10_complex_math_test 2025-03-17T17:45:11.4749464Z inflating: build/bin/c10_intrusive_ptr_test 2025-03-17T17:45:11.4801484Z inflating: build/bin/c10_logging_test 2025-03-17T17:45:11.4852344Z inflating: build/bin/c10_registry_test 2025-03-17T17:45:11.4984944Z inflating: build/bin/c10_small_vector_test 2025-03-17T17:45:11.5033803Z inflating: build/bin/c10_string_util_test 2025-03-17T17:45:11.5072067Z inflating: build/bin/c10_intrusive_ptr_benchmark 2025-03-17T17:45:11.5121343Z inflating: build/bin/c10_lazy_test 2025-03-17T17:45:11.5476997Z inflating: build/bin/protoc-3.13.0.0 2025-03-17T17:45:11.5831911Z inflating: build/bin/protoc 2025-03-17T17:45:11.6222034Z inflating: build/bin/vec_test_all_types_DEFAULT 2025-03-17T17:45:11.6611555Z inflating: build/bin/vec_test_all_types_AVX512 2025-03-17T17:45:11.7052787Z inflating: build/bin/vec_test_all_types_AVX2 2025-03-17T17:45:11.7102318Z inflating: build/bin/HashStoreTest 2025-03-17T17:45:11.7152735Z inflating: build/bin/FileStoreTest 2025-03-17T17:45:11.7204431Z inflating: build/bin/TCPStoreTest 2025-03-17T17:45:11.7268053Z inflating: build/bin/ProcessGroupGlooTest 2025-03-17T17:45:11.7316998Z inflating: build/bin/BackoffTest 2025-03-17T17:45:11.7319702Z inflating: build/bin/example_allreduce 2025-03-17T17:45:11.7371989Z inflating: build/bin/test_dist_autograd 2025-03-17T17:45:11.7374178Z inflating: build/bin/parallel_benchmark 2025-03-17T17:45:11.7382799Z inflating: build/bin/aot_model_compiler_test 2025-03-17T17:45:11.7446777Z inflating: build/bin/test_cpp_rpc 2025-03-17T17:45:11.7493398Z inflating: build/bin/op_allowlist_test 2025-03-17T17:45:11.7557331Z inflating: build/bin/test_mobile_nnc 2025-03-17T17:45:11.7609355Z inflating: build/bin/backend_fallback_test 2025-03-17T17:45:11.7701817Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2025-03-17T17:45:11.7760214Z inflating: build/bin/kernel_stackbased_test 2025-03-17T17:45:11.7851501Z inflating: build/bin/kernel_function_test 2025-03-17T17:45:11.7973358Z inflating: build/bin/kernel_function_legacy_test 2025-03-17T17:45:11.8033824Z inflating: build/bin/KernelFunction_test 2025-03-17T17:45:11.8089861Z inflating: build/bin/IListRef_test 2025-03-17T17:45:11.8138746Z inflating: build/bin/xla_tensor_test 
2025-03-17T17:45:11.8195743Z inflating: build/bin/type_test 2025-03-17T17:45:11.8506548Z inflating: build/bin/test_lazy 2025-03-17T17:45:11.8576082Z inflating: build/bin/legacy_vmap_test 2025-03-17T17:45:11.8625538Z inflating: build/bin/type_ptr_test 2025-03-17T17:45:11.8701621Z inflating: build/bin/tensor_iterator_test 2025-03-17T17:45:11.8751179Z inflating: build/bin/stride_properties_test 2025-03-17T17:45:11.8799680Z inflating: build/bin/StorageUtils_test 2025-03-17T17:45:11.8854345Z inflating: build/bin/apply_utils_test 2025-03-17T17:45:11.8903262Z inflating: build/bin/weakref_test 2025-03-17T17:45:11.8955932Z inflating: build/bin/NamedTensor_test 2025-03-17T17:45:11.9022822Z inflating: build/bin/Dict_test 2025-03-17T17:45:11.9078111Z inflating: build/bin/scalar_test 2025-03-17T17:45:11.9129764Z inflating: build/bin/broadcast_test 2025-03-17T17:45:11.9190223Z inflating: build/bin/basic 2025-03-17T17:45:11.9244915Z inflating: build/bin/cpu_generator_test 2025-03-17T17:45:11.9307223Z inflating: build/bin/MaybeOwned_test 2025-03-17T17:45:11.9405811Z inflating: build/bin/kernel_lambda_test 2025-03-17T17:45:11.9456841Z inflating: build/bin/cpu_profiling_allocator_test 2025-03-17T17:45:11.9508050Z inflating: build/bin/test_parallel 2025-03-17T17:45:11.9558681Z inflating: build/bin/half_test 2025-03-17T17:45:11.9608542Z inflating: build/bin/static_runtime_bench 2025-03-17T17:45:11.9657415Z inflating: build/bin/cpu_allocator_test 2025-03-17T17:45:11.9782481Z inflating: build/bin/kernel_lambda_legacy_test 2025-03-17T17:45:11.9831818Z inflating: build/bin/Dimname_test 2025-03-17T17:45:11.9833626Z inflating: build/bin/verify_api_visibility 2025-03-17T17:45:11.9890058Z inflating: build/bin/atest 2025-03-17T17:45:11.9940389Z inflating: build/bin/memory_overlapping_test 2025-03-17T17:45:11.9988343Z inflating: build/bin/dispatch_key_set_test 2025-03-17T17:45:12.0076416Z inflating: build/bin/cpu_rng_test 2025-03-17T17:45:12.0135036Z inflating: build/bin/inline_container_test 2025-03-17T17:45:12.0137839Z inflating: build/bin/thread_init_test 2025-03-17T17:45:12.0186327Z inflating: build/bin/operators_test 2025-03-17T17:45:12.0468849Z inflating: build/bin/static_runtime_test 2025-03-17T17:45:12.0564620Z inflating: build/bin/List_test 2025-03-17T17:45:12.0613486Z inflating: build/bin/wrapdim_test 2025-03-17T17:45:12.0662124Z inflating: build/bin/operator_name_test 2025-03-17T17:45:12.0710652Z inflating: build/bin/dlconvertor_test 2025-03-17T17:45:12.0760285Z inflating: build/bin/undefined_tensor_test 2025-03-17T17:45:12.0816324Z inflating: build/bin/extension_backend_test 2025-03-17T17:45:12.0864164Z inflating: build/bin/lazy_tensor_test 2025-03-17T17:45:12.0957728Z inflating: build/bin/ivalue_test 2025-03-17T17:45:12.1008409Z inflating: build/bin/mobile_memory_cleanup 2025-03-17T17:45:12.1056541Z inflating: build/bin/CppSignature_test 2025-03-17T17:45:12.1110327Z inflating: build/bin/scalar_tensor_test 2025-03-17T17:45:12.1416836Z inflating: build/bin/op_registration_test 2025-03-17T17:45:12.1467377Z inflating: build/bin/math_kernel_test 2025-03-17T17:45:12.1517860Z inflating: build/bin/memory_format_test 2025-03-17T17:45:12.1572009Z inflating: build/bin/native_test 2025-03-17T17:45:12.1621648Z inflating: build/bin/packedtensoraccessor_test 2025-03-17T17:45:12.1689417Z inflating: build/bin/pow_test 2025-03-17T17:45:12.1737297Z inflating: build/bin/reduce_ops_test 2025-03-17T17:45:12.2991267Z inflating: build/bin/test_api 2025-03-17T17:45:12.3045493Z inflating: build/bin/quantized_test 
2025-03-17T17:45:12.3094533Z inflating: build/bin/reportMemoryUsage_test 2025-03-17T17:45:12.3146165Z inflating: build/bin/test_edge_op_registration 2025-03-17T17:45:12.3150722Z inflating: build/bin/torch_shm_manager 2025-03-17T17:45:12.3166720Z inflating: build/bin/tutorial_tensorexpr 2025-03-17T17:45:12.4152223Z inflating: build/bin/test_tensorexpr 2025-03-17T17:45:12.4708156Z inflating: build/bin/test_jit 2025-03-17T17:45:12.4708800Z creating: .additional_ci_files/ 2025-03-17T17:45:12.4809933Z inflating: .additional_ci_files/test-times.json 2025-03-17T17:45:12.5203662Z inflating: .additional_ci_files/test-class-times.json 2025-03-17T17:45:12.5234096Z ##[group]Run rm artifacts.zip 2025-03-17T17:45:12.5234443Z rm artifacts.zip 2025-03-17T17:45:12.5240464Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:12.5240895Z env: 2025-03-17T17:45:12.5241146Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:12.5241437Z ##[endgroup] 2025-03-17T17:45:12.5578533Z ##[group]Run df -H 2025-03-17T17:45:12.5578809Z df -H 2025-03-17T17:45:12.5584084Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:12.5584491Z env: 2025-03-17T17:45:12.5584733Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:12.5585017Z ##[endgroup] 2025-03-17T17:45:12.5767251Z Filesystem Size Used Avail Use% Mounted on 2025-03-17T17:45:12.5767668Z devtmpfs 4.2M 0 4.2M 0% /dev 2025-03-17T17:45:12.5768035Z tmpfs 8.2G 33k 8.2G 1% /dev/shm 2025-03-17T17:45:12.5768404Z tmpfs 3.3G 488k 3.3G 1% /run 2025-03-17T17:45:12.5768756Z /dev/nvme0n1p1 161G 26G 136G 16% / 2025-03-17T17:45:12.5769328Z tmpfs 8.2G 21k 8.2G 1% /tmp 2025-03-17T17:45:12.5770020Z /dev/nvme0n1p128 11M 1.4M 9.2M 13% /boot/efi 2025-03-17T17:45:12.5830619Z Prepare all required actions 2025-03-17T17:45:12.5831041Z Getting action download info 2025-03-17T17:45:12.7151783Z ##[group]Run ./.github/actions/download-td-artifacts 2025-03-17T17:45:12.7152157Z with: 2025-03-17T17:45:12.7152376Z env: 2025-03-17T17:45:12.7152626Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:12.7152905Z ##[endgroup] 2025-03-17T17:45:12.7231239Z ##[group]Run seemethere/download-artifact-s3@v4 2025-03-17T17:45:12.7231607Z with: 2025-03-17T17:45:12.7231843Z name: td_results 2025-03-17T17:45:12.7232112Z s3-bucket: gha-artifacts 2025-03-17T17:45:12.7232407Z region: us-east-1 2025-03-17T17:45:12.7232656Z env: 2025-03-17T17:45:12.7232888Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:12.7233209Z ##[endgroup] 2025-03-17T17:45:13.1779317Z (node:98264) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-03-17T17:45:13.1779871Z 2025-03-17T17:45:13.1780164Z Please migrate your code to use AWS SDK for JavaScript (v3). 
2025-03-17T17:45:13.1780758Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-03-17T17:45:13.1781366Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-03-17T17:45:13.2685310Z Found 1 objects with prefix pytorch/pytorch/13905937446/td_results/ 2025-03-17T17:45:13.2686029Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/td_results.json 2025-03-17T17:45:13.3274433Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/td_results.json 2025-03-17T17:45:13.3280190Z Artifact download has finished successfully 2025-03-17T17:45:13.3534633Z ##[group]Run mkdir -p .additional_ci_files 2025-03-17T17:45:13.3535042Z mkdir -p .additional_ci_files 2025-03-17T17:45:13.3535515Z mv td_results.json .additional_ci_files/td_results.json || true 2025-03-17T17:45:13.3542180Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:13.3542600Z env: 2025-03-17T17:45:13.3542861Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:13.3543151Z ##[endgroup] 2025-03-17T17:45:13.3886332Z ##[group]Run .github/scripts/parse_ref.py 2025-03-17T17:45:13.3886743Z .github/scripts/parse_ref.py 2025-03-17T17:45:13.3892889Z shell: /usr/bin/bash -e {0} 2025-03-17T17:45:13.3893185Z env: 2025-03-17T17:45:13.3893572Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:13.3893862Z ##[endgroup] 2025-03-17T17:45:13.4254555Z Prepare all required actions 2025-03-17T17:45:13.4255146Z Getting action download info 2025-03-17T17:45:13.5694324Z ##[group]Run ./.github/actions/filter-test-configs 2025-03-17T17:45:13.5694830Z with: 2025-03-17T17:45:13.5695416Z github-token: *** 2025-03-17T17:45:13.5698328Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "crossref", "shard": 1, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "crossref", "shard": 2, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 1, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 2, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 3, "num_shards": 3, "runner": "linux.2xlarge"}]} 2025-03-17T17:45:13.5701448Z job-name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:13.5701949Z env: 2025-03-17T17:45:13.5702185Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:13.5702473Z ##[endgroup] 2025-03-17T17:45:13.5788312Z ##[group]Run nick-fields/retry@v3.0.0 2025-03-17T17:45:13.5788670Z with: 2025-03-17T17:45:13.5788904Z shell: bash 2025-03-17T17:45:13.5789154Z timeout_minutes: 10 2025-03-17T17:45:13.5789426Z max_attempts: 5 2025-03-17T17:45:13.5789687Z retry_wait_seconds: 30 2025-03-17T17:45:13.5790527Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install requests==2.27.1 pyyaml==6.0.1 2025-03-17T17:45:13.5791406Z polling_interval_seconds: 1 2025-03-17T17:45:13.5791729Z warning_on_retry: true 2025-03-17T17:45:13.5792025Z continue_on_error: false 2025-03-17T17:45:13.5792304Z env: 2025-03-17T17:45:13.5792535Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:13.5793069Z 
GITHUB_TOKEN: *** 2025-03-17T17:45:13.5793330Z ##[endgroup] 2025-03-17T17:45:13.6768829Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.1 2025-03-17T17:45:13.9821336Z Defaulting to user installation because normal site-packages is not writeable 2025-03-17T17:45:14.2261691Z Collecting requests==2.27.1 2025-03-17T17:45:14.2501178Z Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) 2025-03-17T17:45:14.4530023Z Collecting pyyaml==6.0.1 2025-03-17T17:45:14.4567892Z Downloading PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (738 kB) 2025-03-17T17:45:14.5263551Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (1.25.10) 2025-03-17T17:45:14.5273120Z Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (2.10) 2025-03-17T17:45:14.5830871Z Collecting certifi>=2017.4.17 2025-03-17T17:45:14.5870818Z Downloading certifi-2025.1.31-py3-none-any.whl (166 kB) 2025-03-17T17:45:14.9149692Z Collecting charset-normalizer~=2.0.0 2025-03-17T17:45:14.9182937Z Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) 2025-03-17T17:45:15.0506241Z Installing collected packages: charset-normalizer, certifi, requests, pyyaml 2025-03-17T17:45:15.2057400Z Successfully installed certifi-2025.1.31 charset-normalizer-2.0.12 pyyaml-6.0.1 requests-2.27.1 2025-03-17T17:45:15.6541129Z Command completed after 1 attempt(s). 2025-03-17T17:45:15.6599181Z ##[group]Run set -x 2025-03-17T17:45:15.6599480Z set -x 2025-03-17T17:45:15.6599733Z  2025-03-17T17:45:15.6600151Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-03-17T17:45:15.6600675Z # in runner workspace 2025-03-17T17:45:15.6601247Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2025-03-17T17:45:15.6607364Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:15.6607779Z env: 2025-03-17T17:45:15.6608025Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:15.6608313Z ##[endgroup] 2025-03-17T17:45:15.6632317Z + python3 /home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2025-03-17T17:45:15.6867261Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2025-03-17T17:45:15.6867709Z echo "Workflow: ${GITHUB_WORKFLOW}" 2025-03-17T17:45:15.6868079Z echo "Job name: ${JOB_NAME}" 2025-03-17T17:45:15.6868401Z  2025-03-17T17:45:15.6868808Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-03-17T17:45:15.6869323Z # in runner workspace 2025-03-17T17:45:15.6869784Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2025-03-17T17:45:15.6870312Z  --workflow "${GITHUB_WORKFLOW}" \ 2025-03-17T17:45:15.6870685Z  --job-name "${JOB_NAME}" \ 2025-03-17T17:45:15.6873443Z  --test-matrix "{"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "crossref", "shard": 1, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "crossref", "shard": 2, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 1, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 2, 
"num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 3, "num_shards": 3, "runner": "linux.2xlarge"}]}" \ 2025-03-17T17:45:15.6876217Z  --selected-test-configs "" \ 2025-03-17T17:45:15.6876615Z  --pr-number "${PR_NUMBER}" \ 2025-03-17T17:45:15.6877018Z  --tag "${TAG}" \ 2025-03-17T17:45:15.6877333Z  --event-name "${EVENT_NAME}" \ 2025-03-17T17:45:15.6877685Z  --schedule "${SCHEDULE}" \ 2025-03-17T17:45:15.6878026Z  --branch "${HEAD_BRANCH}" 2025-03-17T17:45:15.6883432Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:15.6883845Z env: 2025-03-17T17:45:15.6884092Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:15.6884578Z GITHUB_TOKEN: *** 2025-03-17T17:45:15.6885011Z JOB_NAME: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:15.6885516Z PR_NUMBER: 148585 2025-03-17T17:45:15.6885774Z TAG: 2025-03-17T17:45:15.6886019Z EVENT_NAME: pull_request 2025-03-17T17:45:15.6886302Z SCHEDULE: 2025-03-17T17:45:15.6886540Z HEAD_BRANCH: 2025-03-17T17:45:15.6886797Z ##[endgroup] 2025-03-17T17:45:15.6907756Z Workflow: pull 2025-03-17T17:45:15.9545794Z INFO:root:Found no test-config label on the PR, so all test configs are included 2025-03-17T17:45:15.9547039Z Job name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:16.1013568Z ##[group]Run echo "Filtered matrix:" 2025-03-17T17:45:16.1013967Z echo "Filtered matrix:" 2025-03-17T17:45:16.1016993Z echo "{"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "linux.4xlarge"}, {"config": "crossref", "shard": 1, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "crossref", "shard": 2, "num_shards": 2, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 1, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 2, "num_shards": 3, "runner": "linux.2xlarge"}, {"config": "dynamo_wrapped", "shard": 3, "num_shards": 3, "runner": "linux.2xlarge"}]}" 2025-03-17T17:45:16.1019804Z  2025-03-17T17:45:16.1020038Z echo 2025-03-17T17:45:16.1020341Z echo "Is the current job unstable? False" 2025-03-17T17:45:16.1020731Z  2025-03-17T17:45:16.1020957Z echo 2025-03-17T17:45:16.1021238Z echo "Is keep-going label set? False" 2025-03-17T17:45:16.1021589Z  2025-03-17T17:45:16.1021814Z echo 2025-03-17T17:45:16.1022072Z echo "Renabled issues? 
" 2025-03-17T17:45:16.1028139Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:16.1028560Z env: 2025-03-17T17:45:16.1028809Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:16.1029099Z ##[endgroup] 2025-03-17T17:45:16.1052003Z Filtered matrix: 2025-03-17T17:45:16.1055019Z {include: [{config: default, shard: 1, num_shards: 5, runner: linux.4xlarge}, {config: default, shard: 2, num_shards: 5, runner: linux.4xlarge}, {config: default, shard: 3, num_shards: 5, runner: linux.4xlarge}, {config: default, shard: 4, num_shards: 5, runner: linux.4xlarge}, {config: default, shard: 5, num_shards: 5, runner: linux.4xlarge}, {config: crossref, shard: 1, num_shards: 2, runner: linux.2xlarge}, {config: crossref, shard: 2, num_shards: 2, runner: linux.2xlarge}, {config: dynamo_wrapped, shard: 1, num_shards: 3, runner: linux.2xlarge}, {config: dynamo_wrapped, shard: 2, num_shards: 3, runner: linux.2xlarge}, {config: dynamo_wrapped, shard: 3, num_shards: 3, runner: linux.2xlarge}]} 2025-03-17T17:45:16.1057554Z 2025-03-17T17:45:16.1057696Z Is the current job unstable? False 2025-03-17T17:45:16.1057912Z 2025-03-17T17:45:16.1058041Z Is keep-going label set? False 2025-03-17T17:45:16.1058239Z 2025-03-17T17:45:16.1058350Z Renabled issues? 2025-03-17T17:45:16.1106355Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-03-17T17:45:16.1106938Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-03-17T17:45:16.1112209Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T17:45:16.1112615Z env: 2025-03-17T17:45:16.1112855Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:16.1113141Z JOB_TIMEOUT: 600 2025-03-17T17:45:16.1113383Z ##[endgroup] 2025-03-17T17:45:16.1186873Z ##[group]Run set -x 2025-03-17T17:45:16.1187248Z set -x 2025-03-17T17:45:16.1187500Z  2025-03-17T17:45:16.1187765Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2025-03-17T17:45:16.1188193Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2025-03-17T17:45:16.1188622Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2025-03-17T17:45:16.1189019Z  TEST_COMMAND=.ci/onnx/test.sh 2025-03-17T17:45:16.1189356Z else 2025-03-17T17:45:16.1189630Z  TEST_COMMAND=.ci/pytorch/test.sh 2025-03-17T17:45:16.1189979Z fi 2025-03-17T17:45:16.1190209Z  2025-03-17T17:45:16.1190502Z # Leaving 1GB for the runner and other things 2025-03-17T17:45:16.1191126Z TOTAL_AVAILABLE_MEMORY_IN_GB=$(awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo) 2025-03-17T17:45:16.1192069Z # https://docs.docker.com/engine/containers/resource_constraints/#--memory-swap-details, the 3GB swap 2025-03-17T17:45:16.1192826Z # comes from https://github.com/pytorch/test-infra/pull/6058 2025-03-17T17:45:16.1193407Z TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3)) 2025-03-17T17:45:16.1193864Z  2025-03-17T17:45:16.1194161Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-03-17T17:45:16.1194541Z  SHM_OPTS= 2025-03-17T17:45:16.1194818Z  JENKINS_USER= 2025-03-17T17:45:16.1195198Z  # ensure that docker container cleanly exits in 12 hours 2025-03-17T17:45:16.1195703Z  # if for some reason cleanup action doesn't stop container 2025-03-17T17:45:16.1196257Z  # when job is cancelled 2025-03-17T17:45:16.1196601Z  DOCKER_SHELL_CMD="sleep 12h" 2025-03-17T17:45:16.1196930Z  2025-03-17T17:45:16.1197329Z  # since some steps are skipped on s390x, if they are necessary, run them here 2025-03-17T17:45:16.1197922Z  env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-03-17T17:45:16.1198408Z  env | grep '^CI' >> 
"/tmp/github_env_${GITHUB_RUN_ID}" 2025-03-17T17:45:16.1198794Z else 2025-03-17T17:45:16.1199070Z  SHM_OPTS="--shm-size=${SHM_SIZE}" 2025-03-17T17:45:16.1199440Z  JENKINS_USER="--user jenkins" 2025-03-17T17:45:16.1199786Z  DOCKER_SHELL_CMD= 2025-03-17T17:45:16.1200074Z fi 2025-03-17T17:45:16.1200311Z  2025-03-17T17:45:16.1200681Z # detached container should get cleaned up by teardown_ec2_linux 2025-03-17T17:45:16.1201264Z # TODO: Stop building test binaries as part of the build phase 2025-03-17T17:45:16.1201926Z # Used for GPU_FLAG, SHM_OPTS, JENKINS_USER and DOCKER_SHELL_CMD since that doesn't play nice 2025-03-17T17:45:16.1202505Z # shellcheck disable=SC2086,SC2090 2025-03-17T17:45:16.1202875Z container_name=$(docker run \ 2025-03-17T17:45:16.1203219Z  ${GPU_FLAG:-} \ 2025-03-17T17:45:16.1203552Z  ${SCCACHE_SERVER_PORT_DOCKER_FLAG:-} \ 2025-03-17T17:45:16.1203929Z  -e BUILD_ENVIRONMENT \ 2025-03-17T17:45:16.1204258Z  -e PR_NUMBER \ 2025-03-17T17:45:16.1204559Z  -e GITHUB_ACTIONS \ 2025-03-17T17:45:16.1204868Z  -e GITHUB_REPOSITORY \ 2025-03-17T17:45:16.1205196Z  -e GITHUB_WORKFLOW \ 2025-03-17T17:45:16.1205510Z  -e GITHUB_JOB \ 2025-03-17T17:45:16.1205811Z  -e GITHUB_RUN_ID \ 2025-03-17T17:45:16.1206120Z  -e GITHUB_RUN_NUMBER \ 2025-03-17T17:45:16.1206451Z  -e GITHUB_RUN_ATTEMPT \ 2025-03-17T17:45:16.1206777Z  -e JOB_ID \ 2025-03-17T17:45:16.1207054Z  -e JOB_NAME \ 2025-03-17T17:45:16.1207341Z  -e BASE_SHA \ 2025-03-17T17:45:16.1207622Z  -e BRANCH \ 2025-03-17T17:45:16.1207896Z  -e SHA1 \ 2025-03-17T17:45:16.1208174Z  -e AWS_DEFAULT_REGION \ 2025-03-17T17:45:16.1208625Z  -e IN_WHEEL_TEST \ 2025-03-17T17:45:16.1208937Z  -e SHARD_NUMBER \ 2025-03-17T17:45:16.1209241Z  -e TEST_CONFIG \ 2025-03-17T17:45:16.1209547Z  -e NUM_TEST_SHARDS \ 2025-03-17T17:45:16.1209866Z  -e REENABLED_ISSUES \ 2025-03-17T17:45:16.1210197Z  -e CONTINUE_THROUGH_ERROR \ 2025-03-17T17:45:16.1210528Z  -e VERBOSE_TEST_LOGS \ 2025-03-17T17:45:16.1210852Z  -e TEST_SHOWLOCALS \ 2025-03-17T17:45:16.1211177Z  -e NO_TEST_TIMEOUT \ 2025-03-17T17:45:16.1211485Z  -e NO_TD \ 2025-03-17T17:45:16.1211767Z  -e TD_DISTRIBUTED \ 2025-03-17T17:45:16.1212084Z  -e PR_LABELS \ 2025-03-17T17:45:16.1212411Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2025-03-17T17:45:16.1212777Z  -e SCCACHE_BUCKET \ 2025-03-17T17:45:16.1213085Z  -e SCCACHE_REGION \ 2025-03-17T17:45:16.1213387Z  -e XLA_CUDA \ 2025-03-17T17:45:16.1213702Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2025-03-17T17:45:16.1214096Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2025-03-17T17:45:16.1214498Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2025-03-17T17:45:16.1214901Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2025-03-17T17:45:16.1215277Z  -e HUGGING_FACE_HUB_TOKEN \ 2025-03-17T17:45:16.1215637Z  -e SCRIBE_GRAPHQL_ACCESS_TOKEN \ 2025-03-17T17:45:16.1215990Z  -e DASHBOARD_TAG \ 2025-03-17T17:45:16.1216299Z  -e IS_A100_RUNNER \ 2025-03-17T17:45:16.1216620Z  -e ARTIFACTS_FILE_SUFFIX \ 2025-03-17T17:45:16.1217020Z  --memory="${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}g" \ 2025-03-17T17:45:16.1217526Z  --memory-swap="${TOTAL_MEMORY_WITH_SWAP}g" \ 2025-03-17T17:45:16.1217974Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2025-03-17T17:45:16.1218404Z  --security-opt seccomp=unconfined \ 2025-03-17T17:45:16.1218781Z  --cap-add=SYS_PTRACE \ 2025-03-17T17:45:16.1219109Z  --ipc=host \ 2025-03-17T17:45:16.1219393Z  ${SHM_OPTS} \ 2025-03-17T17:45:16.1219673Z  --tty \ 2025-03-17T17:45:16.1219932Z  --detach \ 2025-03-17T17:45:16.1220224Z  --name="${container_name}" \ 2025-03-17T17:45:16.1220566Z  ${JENKINS_USER} \ 
2025-03-17T17:45:16.1220944Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2025-03-17T17:45:16.1221376Z  -w /var/lib/jenkins/workspace \ 2025-03-17T17:45:16.1221727Z  "${DOCKER_IMAGE}" \ 2025-03-17T17:45:16.1222038Z  ${DOCKER_SHELL_CMD} 2025-03-17T17:45:16.1222336Z ) 2025-03-17T17:45:16.1222652Z # Propagate download.pytorch.org IP to container 2025-03-17T17:45:16.1223366Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2025-03-17T17:45:16.1224128Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2025-03-17T17:45:16.1224577Z  2025-03-17T17:45:16.1224864Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-03-17T17:45:16.1225482Z  docker exec -t "${container_name}" sh -c "python3 -m pip install -r .ci/docker/requirements-ci.txt" 2025-03-17T17:45:16.1226042Z fi 2025-03-17T17:45:16.1226383Z  2025-03-17T17:45:16.1226915Z docker exec -t "${container_name}" sh -c "python3 -m pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2025-03-17T17:45:16.1232388Z shell: /usr/bin/bash -e {0} 2025-03-17T17:45:16.1232686Z env: 2025-03-17T17:45:16.1232926Z GIT_DEFAULT_BRANCH: main 2025-03-17T17:45:16.1233276Z BUILD_ENVIRONMENT: linux-focal-py3.13-clang10 2025-03-17T17:45:16.1233630Z PR_NUMBER: 148585 2025-03-17T17:45:16.1233911Z GITHUB_REPOSITORY: pytorch/pytorch 2025-03-17T17:45:16.1234245Z GITHUB_WORKFLOW: pull 2025-03-17T17:45:16.1234518Z GITHUB_JOB: test 2025-03-17T17:45:16.1234777Z GITHUB_RUN_ID: 13905937446 2025-03-17T17:45:16.1235165Z GITHUB_RUN_NUMBER: 299617 2025-03-17T17:45:16.1235466Z GITHUB_RUN_ATTEMPT: 1 2025-03-17T17:45:16.1235735Z JOB_ID: 38909654187 2025-03-17T17:45:16.1236167Z JOB_NAME: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:16.1236671Z BRANCH: pull/148585 2025-03-17T17:45:16.1237205Z SHA1: 52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:45:16.1237605Z BASE_SHA: 7d50234dff8a52633fd546660a133b6f1ab443a9 2025-03-17T17:45:16.1237983Z TEST_CONFIG: dynamo_wrapped 2025-03-17T17:45:16.1238284Z SHARD_NUMBER: 1 2025-03-17T17:45:16.1238536Z NUM_TEST_SHARDS: 3 2025-03-17T17:45:16.1238806Z REENABLED_ISSUES: 2025-03-17T17:45:16.1239081Z CONTINUE_THROUGH_ERROR: False 2025-03-17T17:45:16.1239379Z VERBOSE_TEST_LOGS: False 2025-03-17T17:45:16.1239672Z TEST_SHOWLOCALS: False 2025-03-17T17:45:16.1239956Z NO_TEST_TIMEOUT: False 2025-03-17T17:45:16.1240231Z NO_TD: False 2025-03-17T17:45:16.1240479Z TD_DISTRIBUTED: False 2025-03-17T17:45:16.1240817Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2025-03-17T17:45:16.1241207Z SCCACHE_REGION: us-east-1 2025-03-17T17:45:16.1241497Z SHM_SIZE: 1g 2025-03-17T17:45:16.1242193Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:45:16.1242957Z XLA_CUDA: 2025-03-17T17:45:16.1243344Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2025-03-17T17:45:16.1243837Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2025-03-17T17:45:16.1244184Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2025-03-17T17:45:16.1244512Z DASHBOARD_TAG: 2025-03-17T17:45:16.1245093Z HUGGING_FACE_HUB_TOKEN: *** 2025-03-17T17:45:16.1245542Z SCRIBE_GRAPHQL_ACCESS_TOKEN: *** 2025-03-17T17:45:16.1245868Z IS_A100_RUNNER: 0 2025-03-17T17:45:16.1246267Z ARTIFACTS_FILE_SUFFIX: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T17:45:16.1246741Z ##[endgroup] 2025-03-17T17:45:16.1268114Z + [[ dynamo_wrapped == \m\u\l\t\i\g\p\u ]] 
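For reference, the --memory=14g / --memory-swap=17g flags that appear in the expanded docker run command below come straight from the sizing logic in the step above: total RAM read from /proc/meminfo, minus 1 GB reserved for the runner, plus 3 GB of swap headroom. A standalone sketch of the same calculation (only the final echo is added for illustration):

  # size the test container: leave 1 GB for the runner, allow 3 GB of swap on top
  TOTAL_AVAILABLE_MEMORY_IN_GB=$(awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo)
  TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3))
  echo "--memory=${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}g --memory-swap=${TOTAL_MEMORY_WITH_SWAP}g"

On this c5.2xlarge runner the awk step yields 14.244, which truncates to 14 and gives the 14g/17g pair seen in the trace that follows.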
2025-03-17T17:45:16.1268528Z + [[ linux-focal-py3.13-clang10 == *onnx* ]] 2025-03-17T17:45:16.1268901Z + TEST_COMMAND=.ci/pytorch/test.sh 2025-03-17T17:45:16.1271227Z ++ awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo 2025-03-17T17:45:16.1288969Z + TOTAL_AVAILABLE_MEMORY_IN_GB='14.244 ' 2025-03-17T17:45:16.1289416Z + TOTAL_MEMORY_WITH_SWAP=17 2025-03-17T17:45:16.1289821Z + [[ linux-focal-py3.13-clang10 == *\s\3\9\0\x* ]] 2025-03-17T17:45:16.1290201Z + SHM_OPTS=--shm-size=1g 2025-03-17T17:45:16.1290501Z + JENKINS_USER='--user jenkins' 2025-03-17T17:45:16.1290805Z + DOCKER_SHELL_CMD= 2025-03-17T17:45:16.1296824Z +++ nproc --ignore=2 2025-03-17T17:45:16.1320203Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR -e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e TD_DISTRIBUTED -e PR_LABELS -e MAX_JOBS=6 -e SCCACHE_BUCKET -e SCCACHE_REGION -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 -e HUGGING_FACE_HUB_TOKEN -e SCRIBE_GRAPHQL_ACCESS_TOKEN -e DASHBOARD_TAG -e IS_A100_RUNNER -e ARTIFACTS_FILE_SUFFIX --memory=14g --memory-swap=17g --env-file=/tmp/github_env_13905937446 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=1g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T17:45:24.4763107Z + container_name=aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T17:45:24.4765167Z + grep download.pytorch.org /etc/hosts 2025-03-17T17:45:24.4766374Z + docker exec -i aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 sudo bash -c '/bin/cat >> /etc/hosts' 2025-03-17T17:45:24.6277701Z + echo DOCKER_CONTAINER_ID=aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T17:45:24.6278358Z + [[ linux-focal-py3.13-clang10 == *\s\3\9\0\x* ]] 2025-03-17T17:45:24.6280499Z ++ echo dist/torch-2.7.0a0+git52b8690-cp313-cp313-linux_x86_64.whl 2025-03-17T17:45:24.6282658Z + docker exec -t aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 sh -c 'python3 -m pip install dist/torch-2.7.0a0+git52b8690-cp313-cp313-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2025-03-17T17:45:25.1370591Z Processing ./dist/torch-2.7.0a0+git52b8690-cp313-cp313-linux_x86_64.whl (from torch==2.7.0a0+git52b8690) 2025-03-17T17:45:25.5640295Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (3.16.1) 2025-03-17T17:45:25.5644567Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (4.12.2) 2025-03-17T17:45:25.5657246Z Requirement already satisfied: setuptools in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (75.8.0) 2025-03-17T17:45:25.5661817Z Requirement already 
satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (1.13.3) 2025-03-17T17:45:25.5665449Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (2.8.8) 2025-03-17T17:45:25.5670004Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (3.1.6) 2025-03-17T17:45:25.5673950Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (2024.10.0) 2025-03-17T17:45:25.5692785Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (3.3.0) 2025-03-17T17:45:25.5707719Z Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from opt-einsum>=3.3->torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (2.1.2) 2025-03-17T17:45:25.5722858Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from sympy>=1.13.3->torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (1.3.0) 2025-03-17T17:45:25.5827773Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from jinja2->torch==2.7.0a0+git52b8690->torch==2.7.0a0+git52b8690) (3.0.2) 2025-03-17T17:45:25.7785621Z Installing collected packages: torch 2025-03-17T17:45:35.8041024Z Successfully installed torch-2.7.0a0+git52b8690 2025-03-17T17:45:35.9032659Z + export TERM=vt100 2025-03-17T17:45:35.9033148Z + TERM=vt100 2025-03-17T17:45:35.9034662Z ++ dirname .ci/pytorch/test.sh 2025-03-17T17:45:35.9050881Z + source .ci/pytorch/common.sh 2025-03-17T17:45:35.9057020Z +++ dirname .ci/pytorch/common.sh 2025-03-17T17:45:35.9063815Z ++ source .ci/pytorch/common_utils.sh 2025-03-17T17:45:35.9068912Z +++ declare -f -t trap_add 2025-03-17T17:45:35.9074942Z ++ set -ex -o pipefail 2025-03-17T17:45:35.9075575Z ++ [[ linux-focal-py3.13-clang10 == *rocm* ]] 2025-03-17T17:45:35.9076173Z ++ BUILD_TEST_LIBTORCH=0 2025-03-17T17:45:35.9076523Z + [[ linux-focal-py3.13-clang10 != *rocm* ]] 2025-03-17T17:45:35.9076967Z + [[ linux-focal-py3.13-clang10 != *s390x* ]] 2025-03-17T17:45:35.9077338Z + [[ -d /var/lib/jenkins/workspace ]] 2025-03-17T17:45:35.9079266Z ++ stat -c %u /var/lib/jenkins/workspace 2025-03-17T17:45:35.9124326Z + WORKSPACE_ORIGINAL_OWNER_ID=1000 2025-03-17T17:45:35.9124736Z + trap_add cleanup_workspace EXIT 2025-03-17T17:45:35.9125086Z + trap_add_cmd=cleanup_workspace 2025-03-17T17:45:35.9125396Z + shift 2025-03-17T17:45:35.9125642Z + for trap_add_name in "$@" 2025-03-17T17:45:35.9129723Z +++ trap -p EXIT 2025-03-17T17:45:35.9132722Z ++ eval 'extract_trap_cmd ' 2025-03-17T17:45:35.9133264Z +++ extract_trap_cmd 2025-03-17T17:45:35.9133538Z +++ printf '%s\n' '' 2025-03-17T17:45:35.9133823Z ++ printf '%s\n' cleanup_workspace 2025-03-17T17:45:35.9134810Z + trap -- ' 2025-03-17T17:45:35.9135088Z cleanup_workspace' EXIT 2025-03-17T17:45:35.9135453Z + sudo chown -R jenkins /var/lib/jenkins/workspace 2025-03-17T17:45:36.4270791Z + git config --global --add safe.directory /var/lib/jenkins/workspace 2025-03-17T17:45:36.4459259Z + echo 'Environment variables:' 2025-03-17T17:45:36.4459812Z Environment variables: 2025-03-17T17:45:36.4460127Z + env 
2025-03-17T17:45:36.4478172Z INSTALLED_DB=yes 2025-03-17T17:45:36.4478931Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-03-17T17:45:36.4479706Z CONTINUE_THROUGH_ERROR=False 2025-03-17T17:45:36.4480060Z BUILD_ENVIRONMENT=linux-focal-py3.13-clang10 2025-03-17T17:45:36.4480423Z HOSTNAME=aa508029845b 2025-03-17T17:45:36.4481043Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4481711Z GITHUB_ACTION=__self 2025-03-17T17:45:36.4481998Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-03-17T17:45:36.4482328Z GITHUB_RUN_NUMBER=299617 2025-03-17T17:45:36.4482661Z TEST_CONFIG=dynamo_wrapped 2025-03-17T17:45:36.4483904Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-03-17T17:45:36.4484264Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-03-17T17:45:36.4484600Z IS_A100_RUNNER=0 2025-03-17T17:45:36.4485123Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-03-17T17:45:36.4485450Z GITHUB_TRIGGERING_ACTOR=fadara01 2025-03-17T17:45:36.4485769Z GITHUB_REF_TYPE=branch 2025-03-17T17:45:36.4486067Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-03-17T17:45:36.4486408Z BASE_SHA=7d50234dff8a52633fd546660a133b6f1ab443a9 2025-03-17T17:45:36.4486763Z XLA_CUDA= 2025-03-17T17:45:36.4487110Z HUGGING_FACE_HUB_TOKEN=*** 2025-03-17T17:45:36.4487489Z *** 2025-03-17T17:45:36.4487727Z GITHUB_REPOSITORY_ID=65600975 2025-03-17T17:45:36.4488033Z GITHUB_ACTIONS=true 2025-03-17T17:45:36.4488316Z SHA1=52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:45:36.4488720Z GITHUB_SHA=4c2bc68c957f2652a5ff3ab9ed69449972fbd9e1 2025-03-17T17:45:36.4489352Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/pull.yml@refs/pull/148585/merge 2025-03-17T17:45:36.4489896Z UCC_HOME=/usr 2025-03-17T17:45:36.4490154Z VERBOSE_TEST_LOGS=False 2025-03-17T17:45:36.4490430Z GITHUB_REF=refs/pull/148585/merge 2025-03-17T17:45:36.4490741Z SHARD_NUMBER=1 2025-03-17T17:45:36.4491000Z GITHUB_REF_PROTECTED=false 2025-03-17T17:45:36.4491289Z HOME=/var/lib/jenkins 2025-03-17T17:45:36.4491590Z GITHUB_API_URL=https://api.github.com 2025-03-17T17:45:36.4491952Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-03-17T17:45:36.4492270Z UCX_COMMIT= 2025-03-17T17:45:36.4492505Z NUM_TEST_SHARDS=3 2025-03-17T17:45:36.4492753Z UCX_HOME=/usr 2025-03-17T17:45:36.4493353Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4494203Z JOB_NAME=linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:36.4495029Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4495893Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-03-17T17:45:36.4496455Z GITHUB_EVENT_NAME=pull_request 2025-03-17T17:45:36.4496767Z DASHBOARD_TAG= 2025-03-17T17:45:36.4497033Z GITHUB_RUN_ID=13905937446 2025-03-17T17:45:36.4497712Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4498581Z GITHUB_ACTOR=fadara01 2025-03-17T17:45:36.4498859Z PR_NUMBER=148585 2025-03-17T17:45:36.4499101Z DESIRED_CUDA= 2025-03-17T17:45:36.4499346Z GITHUB_RUN_ATTEMPT=1 2025-03-17T17:45:36.4499629Z ANACONDA_PYTHON_VERSION=3.13 2025-03-17T17:45:36.4499998Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-03-17T17:45:36.4500374Z TERM=vt100 2025-03-17T17:45:36.4500618Z INSTALLED_VISION=yes 
2025-03-17T17:45:36.4500887Z BRANCH=pull/148585 2025-03-17T17:45:36.4501155Z SCCACHE_REGION=us-east-1 2025-03-17T17:45:36.4501457Z OPENSSL_ROOT_DIR=/opt/openssl 2025-03-17T17:45:36.4501768Z CUDA_PATH=/usr/local/cuda 2025-03-17T17:45:36.4502338Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-03-17T17:45:36.4502956Z GITHUB_SERVER_URL=https://github.com 2025-03-17T17:45:36.4503281Z UCC_COMMIT= 2025-03-17T17:45:36.4503517Z REENABLED_ISSUES= 2025-03-17T17:45:36.4503765Z DOCS= 2025-03-17T17:45:36.4503973Z SHLVL=1 2025-03-17T17:45:36.4504199Z MAX_JOBS=6 2025-03-17T17:45:36.4504445Z GITHUB_ACTOR_ID=115173828 2025-03-17T17:45:36.4504819Z GITHUB_WORKFLOW_SHA=4c2bc68c957f2652a5ff3ab9ed69449972fbd9e1 2025-03-17T17:45:36.4505240Z GITHUB_REF_NAME=148585/merge 2025-03-17T17:45:36.4505674Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-03-17T17:45:36.4506231Z GITHUB_JOB=test 2025-03-17T17:45:36.4506493Z NO_TEST_TIMEOUT=False 2025-03-17T17:45:36.4506772Z TD_DISTRIBUTED=False 2025-03-17T17:45:36.4507065Z GITHUB_REPOSITORY=pytorch/pytorch 2025-03-17T17:45:36.4507397Z GITHUB_RETENTION_DAYS=90 2025-03-17T17:45:36.4507695Z OPENSSL_DIR=/opt/openssl 2025-03-17T17:45:36.4508085Z GITHUB_ACTION_REPOSITORY= 2025-03-17T17:45:36.4508920Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-03-17T17:45:36.4509802Z GITHUB_BASE_REF=gh/fadara01/5/base 2025-03-17T17:45:36.4510126Z INSTALLED_ACL= 2025-03-17T17:45:36.4510512Z ARTIFACTS_FILE_SUFFIX=test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T17:45:36.4510978Z CI=true 2025-03-17T17:45:36.4511225Z GITHUB_REPOSITORY_OWNER=pytorch 2025-03-17T17:45:36.4511527Z JOB_ID=38909654187 2025-03-17T17:45:36.4511789Z INSTALLED_PROTOBUF=yes 2025-03-17T17:45:36.4512077Z GITHUB_HEAD_REF=gh/fadara01/5/head 2025-03-17T17:45:36.4512508Z GITHUB_ACTION_REF= 2025-03-17T17:45:36.4512827Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-03-17T17:45:36.4513214Z TEST_SHOWLOCALS=False 2025-03-17T17:45:36.4513490Z GITHUB_WORKFLOW=pull 2025-03-17T17:45:36.4513778Z DEBIAN_FRONTEND=noninteractive 2025-03-17T17:45:36.4514446Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4515128Z NO_TD=False 2025-03-17T17:45:36.4515381Z SKIP_SCCACHE_INITIALIZATION=1 2025-03-17T17:45:36.4515679Z _=/usr/bin/env 2025-03-17T17:45:36.4516018Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2025-03-17T17:45:36.4627111Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch 2025-03-17T17:45:36.4627845Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/bin 2025-03-17T17:45:36.4628649Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib 2025-03-17T17:45:36.4629563Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/test 2025-03-17T17:45:36.4630230Z + BUILD_DIR=build 2025-03-17T17:45:36.4630508Z + BUILD_RENAMED_DIR=build_renamed 2025-03-17T17:45:36.4630835Z + BUILD_BIN_DIR=build/bin 2025-03-17T17:45:36.4631119Z + SHARD_NUMBER=1 2025-03-17T17:45:36.4631389Z + NUM_TEST_SHARDS=3 2025-03-17T17:45:36.4631670Z + export TORCH_SERIALIZATION_DEBUG=1 2025-03-17T17:45:36.4632008Z + TORCH_SERIALIZATION_DEBUG=1 2025-03-17T17:45:36.4632311Z + export VALGRIND=ON 2025-03-17T17:45:36.4632574Z + VALGRIND=ON 
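The TORCH_*_DIR variables set just above are derived from wherever pip placed the freshly installed wheel, using Python's site module rather than a hard-coded path. A minimal sketch of the same lookup (the trailing ls is only an illustrative check, not part of the script):

  # find the torch package the tests will import and peek at its bundled libraries
  TORCH_INSTALL_DIR=$(python -c 'import site; print(site.getsitepackages()[0])')/torch
  TORCH_LIB_DIR="${TORCH_INSTALL_DIR}/lib"
  ls "${TORCH_LIB_DIR}" | head   # expect shared objects such as libtorch_cpu.so and libtorch_python.so
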
2025-03-17T17:45:36.4632866Z + [[ linux-focal-py3.13-clang10 == *clang9* ]] 2025-03-17T17:45:36.4633307Z + [[ linux-focal-py3.13-clang10 == *xpu* ]] 2025-03-17T17:45:36.4634071Z + [[ linux-focal-py3.13-clang10 == *s390x* ]] 2025-03-17T17:45:36.4634735Z + [[ 0 == \1 ]] 2025-03-17T17:45:36.4634981Z + [[ False == \1 ]] 2025-03-17T17:45:36.4635272Z + [[ linux-focal-py3.13-clang10 != *bazel* ]] 2025-03-17T17:45:36.4635649Z ++ realpath build/custom_test_artifacts 2025-03-17T17:45:36.4654153Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2025-03-17T17:45:36.4654721Z + [[ -n '' ]] 2025-03-17T17:45:36.4654976Z + echo 'Environment variables' 2025-03-17T17:45:36.4655292Z Environment variables 2025-03-17T17:45:36.4655559Z + env 2025-03-17T17:45:36.4660697Z INSTALLED_DB=yes 2025-03-17T17:45:36.4661412Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-03-17T17:45:36.4662115Z CONTINUE_THROUGH_ERROR=False 2025-03-17T17:45:36.4662628Z BUILD_ENVIRONMENT=linux-focal-py3.13-clang10 2025-03-17T17:45:36.4663006Z HOSTNAME=aa508029845b 2025-03-17T17:45:36.4663796Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4664498Z GITHUB_ACTION=__self 2025-03-17T17:45:36.4664833Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-03-17T17:45:36.4665166Z GITHUB_RUN_NUMBER=299617 2025-03-17T17:45:36.4665449Z TEST_CONFIG=dynamo_wrapped 2025-03-17T17:45:36.4665761Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-03-17T17:45:36.4666174Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-03-17T17:45:36.4666513Z IS_A100_RUNNER=0 2025-03-17T17:45:36.4667030Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-03-17T17:45:36.4667352Z GITHUB_TRIGGERING_ACTOR=fadara01 2025-03-17T17:45:36.4667709Z GITHUB_REF_TYPE=branch 2025-03-17T17:45:36.4668179Z TORCH_CUDA_ARCH_LIST=Maxwell 2025-03-17T17:45:36.4668516Z BASE_SHA=7d50234dff8a52633fd546660a133b6f1ab443a9 2025-03-17T17:45:36.4668873Z XLA_CUDA= 2025-03-17T17:45:36.4669231Z HUGGING_FACE_HUB_TOKEN=*** 2025-03-17T17:45:36.4669736Z *** 2025-03-17T17:45:36.4669984Z GITHUB_REPOSITORY_ID=65600975 2025-03-17T17:45:36.4670293Z GITHUB_ACTIONS=true 2025-03-17T17:45:36.4670598Z SHA1=52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T17:45:36.4671057Z GITHUB_SHA=4c2bc68c957f2652a5ff3ab9ed69449972fbd9e1 2025-03-17T17:45:36.4671635Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/pull.yml@refs/pull/148585/merge 2025-03-17T17:45:36.4672163Z UCC_HOME=/usr 2025-03-17T17:45:36.4672424Z TORCH_SERIALIZATION_DEBUG=1 2025-03-17T17:45:36.4672723Z VERBOSE_TEST_LOGS=False 2025-03-17T17:45:36.4673009Z GITHUB_REF=refs/pull/148585/merge 2025-03-17T17:45:36.4673316Z SHARD_NUMBER=1 2025-03-17T17:45:36.4673576Z GITHUB_REF_PROTECTED=false 2025-03-17T17:45:36.4673867Z HOME=/var/lib/jenkins 2025-03-17T17:45:36.4674179Z GITHUB_API_URL=https://api.github.com 2025-03-17T17:45:36.4674528Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-03-17T17:45:36.4674846Z UCX_COMMIT= 2025-03-17T17:45:36.4675079Z NUM_TEST_SHARDS=3 2025-03-17T17:45:36.4675332Z UCX_HOME=/usr 2025-03-17T17:45:36.4675937Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4676780Z JOB_NAME=linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T17:45:36.4677597Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4678519Z 
GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-03-17T17:45:36.4679069Z GITHUB_EVENT_NAME=pull_request 2025-03-17T17:45:36.4679374Z DASHBOARD_TAG= 2025-03-17T17:45:36.4679628Z GITHUB_RUN_ID=13905937446 2025-03-17T17:45:36.4680298Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4681018Z GITHUB_ACTOR=fadara01 2025-03-17T17:45:36.4681287Z PR_NUMBER=148585 2025-03-17T17:45:36.4681539Z DESIRED_CUDA= 2025-03-17T17:45:36.4681786Z GITHUB_RUN_ATTEMPT=1 2025-03-17T17:45:36.4682048Z VALGRIND=ON 2025-03-17T17:45:36.4682298Z ANACONDA_PYTHON_VERSION=3.13 2025-03-17T17:45:36.4682795Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-03-17T17:45:36.4683166Z TERM=vt100 2025-03-17T17:45:36.4683406Z INSTALLED_VISION=yes 2025-03-17T17:45:36.4683675Z BRANCH=pull/148585 2025-03-17T17:45:36.4683945Z SCCACHE_REGION=us-east-1 2025-03-17T17:45:36.4684245Z OPENSSL_ROOT_DIR=/opt/openssl 2025-03-17T17:45:36.4684558Z CUDA_PATH=/usr/local/cuda 2025-03-17T17:45:36.4685117Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-03-17T17:45:36.4685738Z GITHUB_SERVER_URL=https://github.com 2025-03-17T17:45:36.4686141Z UCC_COMMIT= 2025-03-17T17:45:36.4686382Z REENABLED_ISSUES= 2025-03-17T17:45:36.4686628Z DOCS= 2025-03-17T17:45:36.4686849Z SHLVL=1 2025-03-17T17:45:36.4687073Z MAX_JOBS=6 2025-03-17T17:45:36.4687317Z GITHUB_ACTOR_ID=115173828 2025-03-17T17:45:36.4687683Z GITHUB_WORKFLOW_SHA=4c2bc68c957f2652a5ff3ab9ed69449972fbd9e1 2025-03-17T17:45:36.4688107Z GITHUB_REF_NAME=148585/merge 2025-03-17T17:45:36.4688547Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-03-17T17:45:36.4689009Z GITHUB_JOB=test 2025-03-17T17:45:36.4689264Z NO_TEST_TIMEOUT=False 2025-03-17T17:45:36.4689536Z TD_DISTRIBUTED=False 2025-03-17T17:45:36.4689824Z GITHUB_REPOSITORY=pytorch/pytorch 2025-03-17T17:45:36.4690151Z GITHUB_RETENTION_DAYS=90 2025-03-17T17:45:36.4690442Z OPENSSL_DIR=/opt/openssl 2025-03-17T17:45:36.4690734Z GITHUB_ACTION_REPOSITORY= 2025-03-17T17:45:36.4691636Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-03-17T17:45:36.4692602Z GITHUB_BASE_REF=gh/fadara01/5/base 2025-03-17T17:45:36.4692928Z INSTALLED_ACL= 2025-03-17T17:45:36.4693331Z ARTIFACTS_FILE_SUFFIX=test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T17:45:36.4693797Z CI=true 2025-03-17T17:45:36.4694050Z GITHUB_REPOSITORY_OWNER=pytorch 2025-03-17T17:45:36.4694354Z JOB_ID=38909654187 2025-03-17T17:45:36.4694602Z INSTALLED_PROTOBUF=yes 2025-03-17T17:45:36.4694900Z GITHUB_HEAD_REF=gh/fadara01/5/head 2025-03-17T17:45:36.4695219Z GITHUB_ACTION_REF= 2025-03-17T17:45:36.4695541Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-03-17T17:45:36.4695922Z TEST_SHOWLOCALS=False 2025-03-17T17:45:36.4696202Z GITHUB_WORKFLOW=pull 2025-03-17T17:45:36.4696486Z DEBIAN_FRONTEND=noninteractive 2025-03-17T17:45:36.4697153Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_7d730cf9-2740-42a7-8567-6ab389641b90 2025-03-17T17:45:36.4697835Z NO_TD=False 2025-03-17T17:45:36.4698091Z SKIP_SCCACHE_INITIALIZATION=1 2025-03-17T17:45:36.4698401Z _=/usr/bin/env 2025-03-17T17:45:36.4698655Z + echo 'Testing pytorch' 2025-03-17T17:45:36.4698933Z Testing pytorch 2025-03-17T17:45:36.4699300Z + export 
LANG=C.UTF-8 2025-03-17T17:45:36.4699567Z + LANG=C.UTF-8 2025-03-17T17:45:36.4734402Z + PR_NUMBER=148585 2025-03-17T17:45:36.4734937Z + [[ dynamo_wrapped == \d\e\f\a\u\l\t ]] 2025-03-17T17:45:36.4735521Z + [[ dynamo_wrapped == \d\i\s\t\r\i\b\u\t\e\d ]] 2025-03-17T17:45:36.4736220Z + [[ dynamo_wrapped == \s\l\o\w ]] 2025-03-17T17:45:36.4737061Z + [[ linux-focal-py3.13-clang10 == *slow-gradcheck* ]] 2025-03-17T17:45:36.4737805Z + [[ linux-focal-py3.13-clang10 == *cuda* ]] 2025-03-17T17:45:36.4738474Z + [[ linux-focal-py3.13-clang10 == *rocm* ]] 2025-03-17T17:45:36.4738848Z + [[ linux-focal-py3.13-clang10 == *xpu* ]] 2025-03-17T17:45:36.4739208Z + [[ dynamo_wrapped == *crossref* ]] 2025-03-17T17:45:36.4739559Z + [[ linux-focal-py3.13-clang10 == *rocm* ]] 2025-03-17T17:45:36.4739944Z + [[ linux-focal-py3.13-clang10 == *xpu* ]] 2025-03-17T17:45:36.4740343Z + [[ linux-focal-py3.13-clang10 != *-bazel-* ]] 2025-03-17T17:45:36.4740721Z + pip_install --user ninja==1.10.2 2025-03-17T17:45:36.4741138Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-03-17T17:45:36.4741662Z + python3 -m pip install --progress-bar off --user ninja==1.10.2 2025-03-17T17:45:37.0037848Z Collecting ninja==1.10.2 2025-03-17T17:45:37.0379515Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB) 2025-03-17T17:45:37.0485482Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2025-03-17T17:45:37.2225122Z Installing collected packages: ninja 2025-03-17T17:45:37.2298862Z  WARNING: The script ninja is installed in '/var/lib/jenkins/.local/bin' which is not on PATH. 2025-03-17T17:45:37.2299910Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 
2025-03-17T17:45:37.2365848Z Successfully installed ninja-1.10.2 2025-03-17T17:45:37.3243266Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-03-17T17:45:37.3244995Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-03-17T17:45:37.3246018Z + [[ linux-focal-py3.13-clang10 == *aarch64* ]] 2025-03-17T17:45:37.3246386Z + install_tlparse 2025-03-17T17:45:37.3246677Z + pip_install --user tlparse==0.3.30 2025-03-17T17:45:37.3247108Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-03-17T17:45:37.3247642Z + python3 -m pip install --progress-bar off --user tlparse==0.3.30 2025-03-17T17:45:37.7524496Z Collecting tlparse==0.3.30 2025-03-17T17:45:37.7860936Z Downloading tlparse-0.3.30-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.9 kB) 2025-03-17T17:45:37.7968153Z Downloading tlparse-0.3.30-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB) 2025-03-17T17:45:37.9980802Z Installing collected packages: tlparse 2025-03-17T17:45:38.0331231Z Successfully installed tlparse-0.3.30 2025-03-17T17:45:38.1222371Z ++ python -m site --user-base 2025-03-17T17:45:38.1368406Z + PATH=/var/lib/jenkins/.local/bin:/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.13/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-03-17T17:45:38.1370005Z + [[ linux-focal-py3.13-clang10 == *asan* ]] 2025-03-17T17:45:38.1370554Z + [[ linux-focal-py3.13-clang10 == *-debug* ]] 2025-03-17T17:45:38.1370955Z + [[ linux-focal-py3.13-clang10 != *-bazel-* ]] 2025-03-17T17:45:38.1371514Z + echo 'We are not in debug mode: linux-focal-py3.13-clang10. Expect the assertion to pass' 2025-03-17T17:45:38.1372202Z We are not in debug mode: linux-focal-py3.13-clang10. 
Expect the assertion to pass 2025-03-17T17:45:38.1373694Z + cd test 2025-03-17T17:45:38.1374443Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2025-03-17T17:45:39.5329643Z + [[ dynamo_wrapped == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2025-03-17T17:45:39.5330105Z + [[ dynamo_wrapped == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2025-03-17T17:45:39.5335209Z + DYNAMO_BENCHMARK_FLAGS=() 2025-03-17T17:45:39.5336881Z + [[ dynamo_wrapped == *pr_time_benchmarks* ]] 2025-03-17T17:45:39.5337362Z + [[ dynamo_wrapped == *dynamo_eager* ]] 2025-03-17T17:45:39.5337727Z + [[ dynamo_wrapped == *aot_eager* ]] 2025-03-17T17:45:39.5338123Z + [[ dynamo_wrapped == *aot_inductor* ]] 2025-03-17T17:45:39.5338473Z + [[ dynamo_wrapped == *inductor* ]] 2025-03-17T17:45:39.5338866Z + [[ dynamo_wrapped == *dynamic* ]] 2025-03-17T17:45:39.5339199Z + [[ dynamo_wrapped == *cpu* ]] 2025-03-17T17:45:39.5339599Z + DYNAMO_BENCHMARK_FLAGS+=(--device cuda) 2025-03-17T17:45:39.5370060Z + [[ linux-focal-py3.13-clang10 == *libtorch* ]] 2025-03-17T17:45:39.5370486Z + [[ linux-focal-py3.13-clang10 == *-bazel-* ]] 2025-03-17T17:45:39.5373624Z + cd test 2025-03-17T17:45:39.5374129Z + python -c 'import torch; print(torch.__config__.show())' 2025-03-17T17:45:40.6739079Z PyTorch built with: 2025-03-17T17:45:40.6739412Z - GCC 4.2 2025-03-17T17:45:40.6739674Z - C++ Version: 201703 2025-03-17T17:45:40.6739975Z - clang 10.0.0 2025-03-17T17:45:40.6740849Z - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2025-03-17T17:45:40.6741631Z - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-03-17T17:45:40.6742118Z - OpenMP 201511 (a.k.a. OpenMP 4.5) 2025-03-17T17:45:40.6742498Z - LAPACK is enabled (usually provided by MKL) 2025-03-17T17:45:40.6742852Z - NNPACK is enabled 2025-03-17T17:45:40.6743146Z - CPU capability usage: AVX512 2025-03-17T17:45:40.6748682Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=52b86900e894e6b34d880548ab6883b3d9207fb6, CXX_COMPILER=/opt/cache/bin/clang++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=braced-scalar-init -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wvla-extension -Wnewline-eof -Winconsistent-missing-override -Winconsistent-missing-destructor-override -Wno-pass-failed -Wno-error=old-style-cast -Wconstant-conversion -Qunused-arguments -fcolor-diagnostics -faligned-new -Werror -fno-math-errno -fno-trapping-math -Werror=format, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.7.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 2025-03-17T17:45:40.6754422Z 2025-03-17T17:45:40.9060856Z + cd test 2025-03-17T17:45:40.9061282Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2025-03-17T17:45:42.0344075Z ATen/Parallel: 2025-03-17T17:45:42.0344572Z at::get_num_threads() : 4 2025-03-17T17:45:42.0345096Z at::get_num_interop_threads() : 4 2025-03-17T17:45:42.0345591Z OpenMP 201511 
(a.k.a. OpenMP 4.5) 2025-03-17T17:45:42.0346162Z omp_get_max_threads() : 4 2025-03-17T17:45:42.0347079Z Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2025-03-17T17:45:42.0348058Z mkl_get_max_threads() : 4 2025-03-17T17:45:42.0348680Z Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-03-17T17:45:42.0349410Z std::thread::hardware_concurrency() : 8 2025-03-17T17:45:42.0349925Z Environment variables: 2025-03-17T17:45:42.0350344Z OMP_NUM_THREADS : [not set] 2025-03-17T17:45:42.0350814Z MKL_NUM_THREADS : [not set] 2025-03-17T17:45:42.0351261Z ATen parallel backend: OpenMP 2025-03-17T17:45:42.0351577Z 2025-03-17T17:45:42.2649330Z + [[ dynamo_wrapped == *numpy_2* ]] 2025-03-17T17:45:42.2649801Z + [[ linux-focal-py3.13-clang10 == *aarch64* ]] 2025-03-17T17:45:42.2650183Z + [[ dynamo_wrapped == *backward* ]] 2025-03-17T17:45:42.2650551Z + [[ dynamo_wrapped == *xla* ]] 2025-03-17T17:45:42.2650866Z + [[ dynamo_wrapped == *executorch* ]] 2025-03-17T17:45:42.2651232Z + [[ dynamo_wrapped == \j\i\t\_\l\e\g\a\c\y ]] 2025-03-17T17:45:42.2651621Z + [[ linux-focal-py3.13-clang10 == *libtorch* ]] 2025-03-17T17:45:42.2652007Z + [[ dynamo_wrapped == distributed ]] 2025-03-17T17:45:42.2652375Z + [[ dynamo_wrapped == *inductor_distributed* ]] 2025-03-17T17:45:42.2652760Z + [[ dynamo_wrapped == *inductor-halide* ]] 2025-03-17T17:45:42.2653145Z + [[ dynamo_wrapped == *inductor-triton-cpu* ]] 2025-03-17T17:45:42.2653698Z + [[ dynamo_wrapped == *inductor-micro-benchmark* ]] 2025-03-17T17:45:42.2654202Z + [[ dynamo_wrapped == *huggingface* ]] 2025-03-17T17:45:42.2654552Z + [[ dynamo_wrapped == *timm* ]] 2025-03-17T17:45:42.2654874Z + [[ dynamo_wrapped == cachebench ]] 2025-03-17T17:45:42.2655223Z + [[ dynamo_wrapped == verify_cachebench ]] 2025-03-17T17:45:42.2655584Z + [[ dynamo_wrapped == *torchbench* ]] 2025-03-17T17:45:42.2655951Z + [[ dynamo_wrapped == *inductor_cpp_wrapper* ]] 2025-03-17T17:45:42.2656600Z + [[ dynamo_wrapped == *inductor* ]] 2025-03-17T17:45:42.2656953Z + [[ dynamo_wrapped == *dynamo_wrapped* ]] 2025-03-17T17:45:42.2657298Z + install_torchvision 2025-03-17T17:45:42.2657591Z + local orig_preload 2025-03-17T17:45:42.2657846Z + local commit 2025-03-17T17:45:42.2658105Z ++ get_pinned_commit vision 2025-03-17T17:45:42.2658424Z ++ cat .github/ci_commit_pins/vision.txt 2025-03-17T17:45:42.2676827Z + commit=d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:42.2677564Z + orig_preload= 2025-03-17T17:45:42.2677895Z + '[' -n '' ']' 2025-03-17T17:45:42.2678528Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:42.2679288Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-03-17T17:45:42.2680153Z + python3 -m pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:42.6362690Z Collecting git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:42.6367073Z Cloning https://github.com/pytorch/vision.git (to revision d23a6e1664d20707c11781299611436e1f0c104f) to /tmp/pip-req-build-0wqkzuce 2025-03-17T17:45:42.6390161Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/vision.git /tmp/pip-req-build-0wqkzuce 2025-03-17T17:45:44.1575901Z Running command git rev-parse -q --verify 'sha^d23a6e1664d20707c11781299611436e1f0c104f' 
2025-03-17T17:45:44.1596616Z Running command git fetch -q https://github.com/pytorch/vision.git d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:45.5384284Z Running command git checkout -q d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:45.8613670Z Resolved https://github.com/pytorch/vision.git to commit d23a6e1664d20707c11781299611436e1f0c104f 2025-03-17T17:45:47.8120612Z Preparing metadata (setup.py) ... done 2025-03-17T17:45:47.8152845Z Requirement already satisfied: numpy in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torchvision==0.19.0a0+d23a6e1) (2.1.2) 2025-03-17T17:45:47.8156137Z Requirement already satisfied: torch in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torchvision==0.19.0a0+d23a6e1) (2.7.0a0+git52b8690) 2025-03-17T17:45:47.8159845Z Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torchvision==0.19.0a0+d23a6e1) (11.0.0) 2025-03-17T17:45:47.8223964Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (3.16.1) 2025-03-17T17:45:47.8227536Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (4.12.2) 2025-03-17T17:45:47.8238059Z Requirement already satisfied: setuptools in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (75.8.0) 2025-03-17T17:45:47.8241918Z Requirement already satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (1.13.3) 2025-03-17T17:45:47.8244714Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (2.8.8) 2025-03-17T17:45:47.8247476Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (3.1.6) 2025-03-17T17:45:47.8250190Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (2024.10.0) 2025-03-17T17:45:47.8263446Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from sympy>=1.13.3->torch->torchvision==0.19.0a0+d23a6e1) (1.3.0) 2025-03-17T17:45:47.8367387Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.13/lib/python3.13/site-packages (from jinja2->torch->torchvision==0.19.0a0+d23a6e1) (3.0.2) 2025-03-17T17:45:47.8466677Z Building wheels for collected packages: torchvision 2025-03-17T17:46:53.9648906Z Building wheel for torchvision (setup.py) ... 
done 2025-03-17T17:46:53.9687729Z Created wheel for torchvision: filename=torchvision-0.19.0a0+d23a6e1-cp313-cp313-linux_x86_64.whl size=1162261 sha256=c409f68681145ca79783c45ad8b332a58da0474c4424f61511d0e33fc161aad5 2025-03-17T17:46:53.9689227Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/81/60/73/f2acb628a45eebe28dd9ff5468e774a0d5e194728570f8ff6f 2025-03-17T17:46:53.9726805Z Successfully built torchvision 2025-03-17T17:46:54.1089019Z Installing collected packages: torchvision 2025-03-17T17:46:54.5485533Z Successfully installed torchvision-0.19.0a0+d23a6e1 2025-03-17T17:46:54.6674946Z + '[' -n '' ']' 2025-03-17T17:46:54.6675350Z + test_dynamo_wrapped_shard 1 2025-03-17T17:46:54.6675663Z + [[ -z 3 ]] 2025-03-17T17:46:54.6675932Z + python tools/dynamo/verify_dynamo.py 2025-03-17T17:46:55.8624384Z Python version: 3.13.2 2025-03-17T17:46:55.8624734Z `torch` version: 2.7.0a0+git52b8690 2025-03-17T17:46:55.8625067Z CUDA version: None 2025-03-17T17:46:55.8625331Z ROCM version: None 2025-03-17T17:46:55.8625495Z 2025-03-17T17:46:55.8626047Z /var/lib/jenkins/workspace/tools/dynamo/verify_dynamo.py:220: UserWarning: Dynamo not yet supported in Python 3.13. Skipping check. 2025-03-17T17:46:55.8627326Z warnings.warn("Dynamo not yet supported in Python 3.13. Skipping check.") 2025-03-17T17:46:55.8627813Z All required checks passed 2025-03-17T17:46:56.0977344Z + python test/run_test.py --dynamo --exclude-inductor-tests --exclude-jit-executor --exclude-distributed-tests --exclude-torch-export-tests --exclude-aot-dispatch-tests --shard 1 3 --verbose --upload-artifacts-while-running 2025-03-17T17:46:56.2004701Z /var/lib/jenkins/workspace/test/run_test.py:24: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html 2025-03-17T17:46:56.2005636Z import pkg_resources 2025-03-17T17:47:00.7025588Z Downloading https://ossci-metrics.s3.amazonaws.com/disabled-tests-condensed.json to /var/lib/jenkins/workspace/test/.pytorch-disabled-tests.json 2025-03-17T17:47:00.7696326Z Ignoring disabled issues: [''] 2025-03-17T17:47:00.7870846Z Found test times from artifacts 2025-03-17T17:47:00.8562412Z Found test times from artifacts 2025-03-17T17:47:00.8585862Z Running all tests 2025-03-17T17:47:00.8700541Z Running parallel tests on 3 processes 2025-03-17T17:47:00.8704799Z Name: tests to run (est. 
time: 73.37min) 2025-03-17T17:47:00.8705453Z Serial tests (30): 2025-03-17T17:47:00.8705763Z test_native_mha 1/1 2025-03-17T17:47:00.8706052Z test_transformers_privateuse1 1/1 2025-03-17T17:47:00.8706525Z test_show_pickle 1/1 2025-03-17T17:47:00.8707088Z test_torch 1/1 2025-03-17T17:47:00.8707427Z test_ci_sanity_check_fail 1/1 2025-03-17T17:47:00.8707741Z test_fake_tensor 1/1 2025-03-17T17:47:00.8708027Z test_jit_disabled 1/1 2025-03-17T17:47:00.8708313Z test_autocast 1/1 2025-03-17T17:47:00.8708590Z test_python_dispatch 1/1 2025-03-17T17:47:00.8708909Z test_cpp_extensions_mtia_backend 1/1 2025-03-17T17:47:00.8709284Z test_autograd_fallback 1/1 2025-03-17T17:47:00.8709598Z test_multiprocessing 1/1 2025-03-17T17:47:00.8709931Z test_cpp_extensions_stream_and_event 1/1 2025-03-17T17:47:00.8710301Z test_tensor_creation_ops 1/1 2025-03-17T17:47:00.8710606Z test_nn 1/2 2025-03-17T17:47:00.8710857Z nn/test_pooling 1/1 2025-03-17T17:47:00.8711134Z test_overrides 1/1 2025-03-17T17:47:00.8711415Z test_cuda_nvml_based_avail 1/1 2025-03-17T17:47:00.8711749Z test_multiprocessing_spawn 1/1 2025-03-17T17:47:00.8712061Z test_reductions 1/4 2025-03-17T17:47:00.8712622Z test_reductions 2/4 2025-03-17T17:47:00.8712910Z test_reductions 3/4 2025-03-17T17:47:00.8713188Z test_reductions 4/4 2025-03-17T17:47:00.8713493Z distributions/test_distributions 1/2 2025-03-17T17:47:00.8713868Z distributions/test_distributions 2/2 2025-03-17T17:47:00.8714210Z doctests 1/1 2025-03-17T17:47:00.8714478Z test_autoload_disable 1/1 2025-03-17T17:47:00.8714786Z test_autoload_enable 1/1 2025-03-17T17:47:00.8715116Z test_cpp_extensions_aot_ninja 1/1 2025-03-17T17:47:00.8715469Z test_cpp_extensions_aot_no_ninja 1/1 2025-03-17T17:47:00.8715814Z Parallel tests (32): 2025-03-17T17:47:00.8716121Z dynamo/test_dynamic_shapes 1/1 2025-03-17T17:47:00.8716452Z dynamo/test_interop 1/1 2025-03-17T17:47:00.8716772Z test_appending_byte_serializer 1/1 2025-03-17T17:47:00.8717115Z dynamo/test_sdpa 1/1 2025-03-17T17:47:00.8717412Z dynamo/test_frame_init 1/1 2025-03-17T17:47:00.8717719Z dynamo/test_sys 1/1 2025-03-17T17:47:00.8718005Z dynamo/test_trace_rules 1/1 2025-03-17T17:47:00.8718324Z dynamo/test_config 1/1 2025-03-17T17:47:00.8718622Z test_jiterator 1/1 2025-03-17T17:47:00.8718903Z dynamo/test_sources 1/1 2025-03-17T17:47:00.8719206Z dynamo/test_optimizers 1/1 2025-03-17T17:47:00.8719512Z dynamo/test_metrics_context 1/1 2025-03-17T17:47:00.8719835Z xpu/test_conv 1/1 2025-03-17T17:47:00.8720130Z dynamo/test_python_dispatcher 1/1 2025-03-17T17:47:00.8720457Z test_hub 1/1 2025-03-17T17:47:00.8720720Z dynamo/test_flat_apply 1/1 2025-03-17T17:47:00.8721020Z xpu/test_gemm 1/1 2025-03-17T17:47:00.8721307Z dynamo/test_verify_correctness 1/1 2025-03-17T17:47:00.8721772Z test_cuda_expandable_segments 1/1 2025-03-17T17:47:00.8722113Z dynamo/test_debug_utils 1/1 2025-03-17T17:47:00.8722432Z dynamo/test_structured_trace 1/1 2025-03-17T17:47:00.8722758Z test_matmul_cuda 1/1 2025-03-17T17:47:00.8723050Z dynamo/test_aot_autograd 1/1 2025-03-17T17:47:00.8723372Z dynamo/test_higher_order_ops 1/1 2025-03-17T17:47:00.8723720Z dynamo/test_aot_autograd_cache 1/1 2025-03-17T17:47:00.8724057Z dynamo/test_exc 1/1 2025-03-17T17:47:00.8724334Z test_cuda_multigpu 1/1 2025-03-17T17:47:00.8724618Z dynamo/test_ctx_manager 1/1 2025-03-17T17:47:00.8724928Z dynamo/test_minifier 1/1 2025-03-17T17:47:00.8725234Z dynamo/test_reorder_logs 1/1 2025-03-17T17:47:00.8725543Z test_linalg 4/4 2025-03-17T17:47:00.8725818Z dynamo/test_python_autograd 1/1 2025-03-17T17:47:00.8726152Z 
Name: excluded (est. time: 0.0min) 2025-03-17T17:47:00.8726470Z Serial tests (0): 2025-03-17T17:47:00.8726733Z Parallel tests (0): 2025-03-17T17:47:00.8809804Z Running test_native_mha 1/1 ... [2025-03-17 17:47:00.880640] 2025-03-17T17:47:00.8810592Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:47:00.8814317Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_native_mha.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:47:00.881056] 2025-03-17T17:47:25.5791394Z 2025-03-17T17:47:25.5792531Z test_native_mha 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_native_mha_1.1_b0aa962e8bd653f2_.log 2025-03-17T17:47:25.5818801Z Running 28 items in this shard: test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_attention_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_encoder_decoder_attention_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_False_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, 
test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_False_pad_all_False_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_False_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_False_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_False_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_True_fused_False_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_native_multihead_self_attention_use_nt_True_use_padding_True_pad_all_True_need_weights_False_average_attn_weights_True_fused_True_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_transform_bias_rescale_qkv_cpu_float32, test/test_native_mha.py::TestMHADeviceTypeCPU::test_transform_bias_rescale_qkv_nested_cpu_float32 2025-03-17T17:47:25.5840129Z 2025-03-17T17:47:25.5840373Z Running test_transformers_privateuse1 1/1 ... [2025-03-17 17:47:25.579533] 2025-03-17T17:47:27.0403938Z running install 2025-03-17T17:47:27.0413144Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T17:47:27.0414085Z !! 2025-03-17T17:47:27.0414208Z 2025-03-17T17:47:27.0414339Z ******************************************************************************** 2025-03-17T17:47:27.0414806Z Please avoid running ``setup.py`` directly. 2025-03-17T17:47:27.0415252Z Instead, use pypa/build, pypa/installer or other 2025-03-17T17:47:27.0415693Z standards-based tools. 2025-03-17T17:47:27.0415901Z 2025-03-17T17:47:27.0416271Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 
2025-03-17T17:47:27.0416872Z ******************************************************************************** 2025-03-17T17:47:27.0417137Z 2025-03-17T17:47:27.0417225Z !! 2025-03-17T17:47:27.0417460Z self.initialize_options() 2025-03-17T17:47:27.0542803Z running build 2025-03-17T17:47:27.0543260Z running build_py 2025-03-17T17:47:27.0619900Z creating build/lib.linux-x86_64-cpython-313/pytorch_openreg 2025-03-17T17:47:27.0621589Z copying pytorch_openreg/__init__.py -> build/lib.linux-x86_64-cpython-313/pytorch_openreg 2025-03-17T17:47:27.0628521Z copying pytorch_openreg/_aten_impl.py -> build/lib.linux-x86_64-cpython-313/pytorch_openreg 2025-03-17T17:47:27.0634319Z copying pytorch_openreg/_device_daemon.py -> build/lib.linux-x86_64-cpython-313/pytorch_openreg 2025-03-17T17:47:27.0645655Z copying pytorch_openreg/_meta_parser.py -> build/lib.linux-x86_64-cpython-313/pytorch_openreg 2025-03-17T17:47:27.0654782Z running build_ext 2025-03-17T17:47:27.2048169Z building 'pytorch_openreg._C' extension 2025-03-17T17:47:27.2051634Z creating /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc 2025-03-17T17:47:27.2340634Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T17:47:27.2341568Z Compiling objects... 2025-03-17T17:47:27.2341911Z Using envvar MAX_JOBS (6) as the number of workers... 2025-03-17T17:47:27.6460846Z [1/3] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/Module.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/Module.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/Module.o -g -Wall -Werror -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T17:47:27.6473528Z [2/3] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegHooks.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include 
-I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegHooks.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegHooks.o -g -Wall -Werror -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T17:47:27.6625149Z [3/3] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegMem.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegMem.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegMem.o -g -Wall -Werror -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T17:47:27.6679712Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/Module.o /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegHooks.o /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/build/temp.linux-x86_64-cpython-313/var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension/pytorch_openreg/csrc/OpenRegMem.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/pytorch_openreg/_C.so 2025-03-17T17:47:27.9237947Z running install_lib 2025-03-17T17:47:27.9315610Z creating install/opt/conda/envs/py_3.13/lib/python3.13/site-packages 
2025-03-17T17:47:27.9318845Z creating install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9320364Z copying build/lib.linux-x86_64-cpython-313/pytorch_openreg/__init__.py -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9321892Z copying build/lib.linux-x86_64-cpython-313/pytorch_openreg/_aten_impl.py -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9323356Z copying build/lib.linux-x86_64-cpython-313/pytorch_openreg/_device_daemon.py -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9325180Z copying build/lib.linux-x86_64-cpython-313/pytorch_openreg/_meta_parser.py -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9326466Z copying build/lib.linux-x86_64-cpython-313/pytorch_openreg/_C.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg 2025-03-17T17:47:27.9372588Z byte-compiling ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg/__init__.py to __init__.cpython-313.pyc 2025-03-17T17:47:27.9375910Z byte-compiling ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg/_aten_impl.py to _aten_impl.cpython-313.pyc 2025-03-17T17:47:27.9392568Z byte-compiling ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg/_device_daemon.py to _device_daemon.cpython-313.pyc 2025-03-17T17:47:27.9421142Z byte-compiling ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg/_meta_parser.py to _meta_parser.cpython-313.pyc 2025-03-17T17:47:27.9429781Z running install_egg_info 2025-03-17T17:47:27.9599000Z running egg_info 2025-03-17T17:47:27.9667854Z creating pytorch_openreg.egg-info 2025-03-17T17:47:27.9668823Z writing pytorch_openreg.egg-info/PKG-INFO 2025-03-17T17:47:27.9672615Z writing dependency_links to pytorch_openreg.egg-info/dependency_links.txt 2025-03-17T17:47:27.9674615Z writing requirements to pytorch_openreg.egg-info/requires.txt 2025-03-17T17:47:27.9675522Z writing top-level names to pytorch_openreg.egg-info/top_level.txt 2025-03-17T17:47:27.9676922Z writing manifest file 'pytorch_openreg.egg-info/SOURCES.txt' 2025-03-17T17:47:27.9752999Z reading manifest file 'pytorch_openreg.egg-info/SOURCES.txt' 2025-03-17T17:47:27.9760389Z writing manifest file 'pytorch_openreg.egg-info/SOURCES.txt' 2025-03-17T17:47:27.9761807Z Copying pytorch_openreg.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/pytorch_openreg-1.0-py3.13.egg-info 2025-03-17T17:47:27.9767699Z running install_scripts 2025-03-17T17:47:28.3792580Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:47:28.3795613Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_transformers_privateuse1.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:47:28.379318] 2025-03-17T17:47:36.3545930Z 2025-03-17T17:47:36.3547072Z test_transformers_privateuse1 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_transformers_privateuse1_1.1_539f378470681bfd_.log 2025-03-17T17:47:36.3549745Z Running 3 items in this shard: test/test_transformers_privateuse1.py::TestSDPAPrivateUse1Only::test_fused_sdp_choice_privateuseone, test/test_transformers_privateuse1.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable, test/test_transformers_privateuse1.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable_backward 2025-03-17T17:47:36.3551809Z 2025-03-17T17:47:36.3552182Z Running test_show_pickle 1/1 ... [2025-03-17 17:47:36.354759] 2025-03-17T17:47:36.3552929Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:47:36.3554375Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_show_pickle.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:47:36.355071] 2025-03-17T17:47:40.0245655Z 2025-03-17T17:47:40.0246837Z test_show_pickle 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_show_pickle_1.1_b9ad8ad54d2c1d87_.log 2025-03-17T17:47:40.0247857Z Running 1 items in this shard: test/test_show_pickle.py::TestShowPickle::test_scripted_model 2025-03-17T17:47:40.0248363Z 2025-03-17T17:47:40.0249093Z Running test_torch 1/1 ... [2025-03-17 17:47:40.024737] 2025-03-17T17:47:40.0249785Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:47:40.0252677Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_torch.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:47:40.025031] 2025-03-17T17:52:36.0910010Z 2025-03-17T17:52:36.0911195Z test_torch 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_torch_1.1_b84af707d22a9fa2_.log 2025-03-17T17:52:36.1252787Z Running 1035 items in this shard: test/test_torch.py::TestBasicVitalSigns::test_basic_vitals, test/test_torch.py::TestBasicVitalSigns::test_basic_vitals_read_write, test/test_torch.py::TestBasicVitalSigns::test_dataloader_vitals, test/test_torch.py::TestTorch::test_RNGState, test/test_torch.py::TestTorch::test_RNGStateAliasing, test/test_torch.py::TestTorch::test_RNG_after_pickle, test/test_torch.py::TestTorch::test_Size, test/test_torch.py::TestTorch::test_Size_iter, test/test_torch.py::TestTorch::test_Size_scalar, test/test_torch.py::TestTorch::test_add_meta_scalar, test/test_torch.py::TestTorch::test_allow_tensor_metadata_change, test/test_torch.py::TestTorch::test_apply, test/test_torch.py::TestTorch::test_as_subclass, test/test_torch.py::TestTorch::test_assert_async, test/test_torch.py::TestTorch::test_backward_hooks_traverse, test/test_torch.py::TestTorch::test_batch_norm_cpu_inference, test/test_torch.py::TestTorch::test_bf16_supported_on_cpu, test/test_torch.py::TestTorch::test_bmm_multithreaded, test/test_torch.py::TestTorch::test_boxMullerState, test/test_torch.py::TestTorch::test_cat_neg_dim, test/test_torch.py::TestTorch::test_check, test/test_torch.py::TestTorch::test_chunk_neg_dim, test/test_torch.py::TestTorch::test_conj_neg_tolist, test/test_torch.py::TestTorch::test_conj_physical_meta_stride, test/test_torch.py::TestTorch::test_contains, test/test_torch.py::TestTorch::test_copy_broadcast, test/test_torch.py::TestTorch::test_copy_dtypes, test/test_torch.py::TestTorch::test_copy_float16, test/test_torch.py::TestTorch::test_copy_many_to_one, test/test_torch.py::TestTorch::test_copy_transpose, test/test_torch.py::TestTorch::test_cuda_not_built, test/test_torch.py::TestTorch::test_cummax_neg_dim, test/test_torch.py::TestTorch::test_cummin_neg_dim, test/test_torch.py::TestTorch::test_cumprod_neg_dim, test/test_torch.py::TestTorch::test_cumsum_neg_dim, test/test_torch.py::TestTorch::test_cxx_flags, test/test_torch.py::TestTorch::test_data_ptr_of_empty_tensor_with_storage, test/test_torch.py::TestTorch::test_data_ptr_of_empty_view_with_storage, test/test_torch.py::TestTorch::test_deepcopy_gradient, test/test_torch.py::TestTorch::test_deepcopy_parameter, test/test_torch.py::TestTorch::test_deterministic_fill_uninitialized_memory, test/test_torch.py::TestTorch::test_deterministic_flag, test/test_torch.py::TestTorch::test_device, test/test_torch.py::TestTorch::test_dim_order, test/test_torch.py::TestTorch::test_dir, test/test_torch.py::TestTorch::test_doc, test/test_torch.py::TestTorch::test_doc_template, test/test_torch.py::TestTorch::test_dot_data_use, test/test_torch.py::TestTorch::test_dtype_is_signed, test/test_torch.py::TestTorch::test_element_size, test/test_torch.py::TestTorch::test_empty_meta, test/test_torch.py::TestTorch::test_empty_storage_view, test/test_torch.py::TestTorch::test_equal, test/test_torch.py::TestTorch::test_error_msg_type_translation, test/test_torch.py::TestTorch::test_fill_diagonal, test/test_torch.py::TestTorch::test_format_scalar_meta, test/test_torch.py::TestTorch::test_from_buffer, test/test_torch.py::TestTorch::test_from_file, test/test_torch.py::TestTorch::test_gather_neg_dim, test/test_torch.py::TestTorch::test_generator_cpu, test/test_torch.py::TestTorch::test_get_cpu_capability, 
test/test_torch.py::TestTorch::test_has_internal_overlap, test/test_torch.py::TestTorch::test_has_storage, test/test_torch.py::TestTorch::test_index_add, test/test_torch.py::TestTorch::test_index_add_all_dtypes, test/test_torch.py::TestTorch::test_index_add_cornercase, test/test_torch.py::TestTorch::test_index_add_correctness, test/test_torch.py::TestTorch::test_index_add_neg_dim, test/test_torch.py::TestTorch::test_index_copy_neg_dim, test/test_torch.py::TestTorch::test_index_fill_neg_dim, test/test_torch.py::TestTorch::test_index_select_neg_dim, test/test_torch.py::TestTorch::test_invalid_arg_error_handling, test/test_torch.py::TestTorch::test_invalid_generator_raises, test/test_torch.py::TestTorch::test_is_nonzero, test/test_torch.py::TestTorch::test_is_same_size, test/test_torch.py::TestTorch::test_iter, test/test_torch.py::TestTorch::test_kthvalue_neg_dim, test/test_torch.py::TestTorch::test_linspace_logspace, test/test_torch.py::TestTorch::test_logcumsumexp_neg_dim, test/test_torch.py::TestTorch::test_manual_seed, test/test_torch.py::TestTorch::test_map, test/test_torch.py::TestTorch::test_map2, test/test_torch.py::TestTorch::test_max_neg_dim, test/test_torch.py::TestTorch::test_mean_neg_dim, test/test_torch.py::TestTorch::test_median_neg_dim, test/test_torch.py::TestTorch::test_memory_format, test/test_torch.py::TestTorch::test_memory_format_contiguous_returns_same_tensor_if_already_satisfies, test/test_torch.py::TestTorch::test_memory_format_empty, test/test_torch.py::TestTorch::test_min_neg_dim, test/test_torch.py::TestTorch::test_mode_neg_dim, test/test_torch.py::TestTorch::test_multinomial_invalid_probs, test/test_torch.py::TestTorch::test_nanmedian_neg_dim, test/test_torch.py::TestTorch::test_narrow_neg_dim, test/test_torch.py::TestTorch::test_nbytes, test/test_torch.py::TestTorch::test_ndim, test/test_torch.py::TestTorch::test_new, test/test_torch.py::TestTorch::test_newaxis_numpy_comparison, test/test_torch.py::TestTorch::test_newindex, test/test_torch.py::TestTorch::test_no_cuda_monkeypatch, test/test_torch.py::TestTorch::test_norm_neg_dim, test/test_torch.py::TestTorch::test_normal_shape, test/test_torch.py::TestTorch::test_numel, test/test_torch.py::TestTorch::test_parallel_info, test/test_torch.py::TestTorch::test_parsing_double, test/test_torch.py::TestTorch::test_parsing_int64, test/test_torch.py::TestTorch::test_parsing_intlist, test/test_torch.py::TestTorch::test_permute, test/test_torch.py::TestTorch::test_pickle, test/test_torch.py::TestTorch::test_pickle_dtype, test/test_torch.py::TestTorch::test_pickle_function, test/test_torch.py::TestTorch::test_pickle_generator, test/test_torch.py::TestTorch::test_pickle_parameter, test/test_torch.py::TestTorch::test_pickle_parameter_no_requires_grad, test/test_torch.py::TestTorch::test_pickle_size, test/test_torch.py::TestTorch::test_pin_memory, test/test_torch.py::TestTorch::test_print, test/test_torch.py::TestTorch::test_prod_neg_dim, test/test_torch.py::TestTorch::test_pyobj_preserved, test/test_torch.py::TestTorch::test_qengine, test/test_torch.py::TestTorch::test_renorm_neg_dim, test/test_torch.py::TestTorch::test_resizable, test/test_torch.py::TestTorch::test_reversed, test/test_torch.py::TestTorch::test_scatter_neg_dim, test/test_torch.py::TestTorch::test_select_neg_dim, test/test_torch.py::TestTorch::test_set_flush_denormal, test/test_torch.py::TestTorch::test_setting_real_imag_to_a_number, test/test_torch.py::TestTorch::test_show_config, test/test_torch.py::TestTorch::test_size_neg_dim, 
test/test_torch.py::TestTorch::test_size_stride, test/test_torch.py::TestTorch::test_sizeof, test/test_torch.py::TestTorch::test_slice, test/test_torch.py::TestTorch::test_slow_test, test/test_torch.py::TestTorch::test_sobolengine_bounds, test/test_torch.py::TestTorch::test_sobolengine_bounds_scrambled, test/test_torch.py::TestTorch::test_sobolengine_continuing, test/test_torch.py::TestTorch::test_sobolengine_continuing_scrambled, test/test_torch.py::TestTorch::test_sobolengine_default_dtype, test/test_torch.py::TestTorch::test_sobolengine_distribution, test/test_torch.py::TestTorch::test_sobolengine_distribution_scrambled, test/test_torch.py::TestTorch::test_sobolengine_draw, test/test_torch.py::TestTorch::test_sobolengine_draw_base2, test/test_torch.py::TestTorch::test_sobolengine_draw_base2_scrambled, test/test_torch.py::TestTorch::test_sobolengine_draw_scrambled, test/test_torch.py::TestTorch::test_sobolengine_fast_forward, test/test_torch.py::TestTorch::test_sobolengine_fast_forward_scrambled, test/test_torch.py::TestTorch::test_sobolengine_first_point, test/test_torch.py::TestTorch::test_sobolengine_high_dim, test/test_torch.py::TestTorch::test_sobolengine_raise, test/test_torch.py::TestTorch::test_sobolengine_reset, test/test_torch.py::TestTorch::test_sobolengine_reset_scrambled, test/test_torch.py::TestTorch::test_sort_neg_dim, test/test_torch.py::TestTorch::test_split_neg_dim, test/test_torch.py::TestTorch::test_split_with_sizes_copy_out, test/test_torch.py::TestTorch::test_squeeze_neg_dim, test/test_torch.py::TestTorch::test_std_neg_dim, test/test_torch.py::TestTorch::test_storage_base_init, test/test_torch.py::TestTorch::test_storage_base_new, test/test_torch.py::TestTorch::test_storage_byteswap, test/test_torch.py::TestTorch::test_storage_casts, test/test_torch.py::TestTorch::test_storage_cycle_via_dict, test/test_torch.py::TestTorch::test_storage_cycle_via_slots, test/test_torch.py::TestTorch::test_storage_dead_weak_ref, test/test_torch.py::TestTorch::test_storage_dealloc, test/test_torch.py::TestTorch::test_storage_dealloc_resurrected, test/test_torch.py::TestTorch::test_storage_dealloc_subclass_resurrected, test/test_torch.py::TestTorch::test_storage_dealloc_subclass_zombie, test/test_torch.py::TestTorch::test_storage_dict_dealloc, test/test_torch.py::TestTorch::test_storage_error, test/test_torch.py::TestTorch::test_storage_error_no_attribute, test/test_torch.py::TestTorch::test_storage_finalizer_dealloc, test/test_torch.py::TestTorch::test_storage_fix_weakref_no_leak, test/test_torch.py::TestTorch::test_storage_from_tensor_dealloc, test/test_torch.py::TestTorch::test_storage_from_tensor_dealloc_resurrected, test/test_torch.py::TestTorch::test_storage_from_tensor_dealloc_zombie, test/test_torch.py::TestTorch::test_storage_preserve_nonhermetic_in_hermetic_context, test/test_torch.py::TestTorch::test_storage_resurrected_weak_ref, test/test_torch.py::TestTorch::test_storage_slot_dealloc, test/test_torch.py::TestTorch::test_storage_weakref_dealloc, test/test_torch.py::TestTorch::test_structseq_repr, test/test_torch.py::TestTorch::test_subclass_preserved, test/test_torch.py::TestTorch::test_subclass_tensors, test/test_torch.py::TestTorch::test_sum_neg_dim, test/test_torch.py::TestTorch::test_swap_basic, test/test_torch.py::TestTorch::test_swap_fail_slots, test/test_torch.py::TestTorch::test_t_not_2d_error, test/test_torch.py::TestTorch::test_tensor_base_init, test/test_torch.py::TestTorch::test_tensor_base_new, test/test_torch.py::TestTorch::test_tensor_ctor_scalar, 
test/test_torch.py::TestTorch::test_tensor_cycle_via_dict, test/test_torch.py::TestTorch::test_tensor_cycle_via_slots, test/test_torch.py::TestTorch::test_tensor_dead_weak_ref, test/test_torch.py::TestTorch::test_tensor_dict_dealloc, test/test_torch.py::TestTorch::test_tensor_finalizer_dealloc, test/test_torch.py::TestTorch::test_tensor_fix_weakref_no_leak, test/test_torch.py::TestTorch::test_tensor_ressurecting_clear, test/test_torch.py::TestTorch::test_tensor_resurrected_weak_ref, test/test_torch.py::TestTorch::test_tensor_set, test/test_torch.py::TestTorch::test_tensor_set_errors, test/test_torch.py::TestTorch::test_tensor_slot_dealloc, test/test_torch.py::TestTorch::test_tensor_weakref_dealloc, test/test_torch.py::TestTorch::test_tensor_where_scalar, test/test_torch.py::TestTorch::test_tensoriterator_output_setup, test/test_torch.py::TestTorch::test_terminate_handler_on_crash, test/test_torch.py::TestTorch::test_to, test/test_torch.py::TestTorch::test_to_with_tensor, test/test_torch.py::TestTorch::test_topk_neg_dim, test/test_torch.py::TestTorch::test_torch_from_file, test/test_torch.py::TestTorch::test_transpose_neg_dim, test/test_torch.py::TestTorch::test_type, test/test_torch.py::TestTorch::test_type_alias, test/test_torch.py::TestTorch::test_type_conversion_via_dtype_name, test/test_torch.py::TestTorch::test_typed_storage_deprecation_warning, test/test_torch.py::TestTorch::test_typed_storage_internal_no_warning, test/test_torch.py::TestTorch::test_unbind_neg_dim, test/test_torch.py::TestTorch::test_unflatten, test/test_torch.py::TestTorch::test_unfold_neg_dim, test/test_torch.py::TestTorch::test_unsqueeze_neg_dim, test/test_torch.py::TestTorch::test_upsample_nearest1d_meta, test/test_torch.py::TestTorch::test_upsample_nearest2d_meta, test/test_torch.py::TestTorch::test_var_neg_dim, test/test_torch.py::TestTorch::test_warn_types, test/test_torch.py::TestTorch::test_wildcard_import, test/test_torch.py::TestVitalSignsCudaCPU::test_cuda_vitals_gpu_only_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test__local_scalar_dense_with_empty_tensor_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcdiv_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_cuda_errors_with_cpu_scalars_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_int8, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_False_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_addcmul_use_cpu_scalar_True_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_assertRaisesRegex_ignore_msg_non_native_device_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_edge_cases_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_edge_cases_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_edge_cases_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_p_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_p_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_p_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_p_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_bernoulli_self_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_bfloat16_neg_abs_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_bool_tensor_value_change_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_add_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_addcdiv_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_addcmul_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_atan2_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_copy_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_dist_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_div_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_eq_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_fmod_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_ge_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_gt_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_le_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_lerp_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_lt_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_map2_cpu, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_map_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_masked_fill_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_masked_scatter_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_masked_select_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_max_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_min_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_mul_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_ne_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_pow_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_remainder_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_broadcast_fn_sub_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_bytes_to_scalar_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_kstest_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_no_inf_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_cauchy_no_inf_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_cuda_backward_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_euclidean_large_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_grad_p_lt_1_no_nan_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_large_batch_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_large_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_non_contiguous_batch_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_non_contiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_norm_batch_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_norm_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cdist_same_inputs_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_check_tensor_all_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_check_tensor_internal_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_clone_all_dtypes_and_devices_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_clone_not_memory_dense_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_clone_zero_stride_dim_cpu, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_complex_half_experimental_warning_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_constants_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_conv_transposed_backward_agnostic_to_memory_format_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_conv_transposed_large_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_complex32, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy__cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_all_dtypes_and_devices_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_math_view_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_mem_overlap_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_transpose_math_view_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_transpose_math_view_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_copy_transpose_math_view_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_corrcoef_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_corrcoef_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_corrcoef_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_cov_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cpp_warnings_have_python_context_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cublas_config_nondeterministic_alert_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cummax_cummin_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cummax_discontiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cummin_discontiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cumprod_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cumsum_64bit_indexing_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_cumsum_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_deepcopy_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deepcopy_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deepcopy_scalar_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deepcopy_scalar_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_cumsum_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_complex32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_float32, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_empty_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_interpolate_bilinear_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_replication_pad2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_deterministic_resize_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_device_guard_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_int32, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_diff_noncontig_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_dim_function_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_discontiguous_out_cumsum_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_dist_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_dtypetensor_warnings_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_errors_index_copy_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_expected_failure_xla_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_kstest_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_kstest_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_kstest_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_kstest_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_no_zero_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_exponential_no_zero_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_gather_backward_deterministic_path_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_gather_backward_one_dim_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_geometric_kstest_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scale_will_not_overflow_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaler_deprecated_warning_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaler_pass_itself_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_accumulation_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach0_fused0_AdamW_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach0_fused0_Adam_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach0_fused0_SGD_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach2_fused_True_AdamW_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach2_fused_True_Adam_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach2_fused_True_SGD_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach_True_fused1_AdamW_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach_True_fused1_Adam_cpu_float32, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_autocast_foreach_True_fused1_SGD_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_clipping_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_clipping_separate_unscale_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_multiple_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_penalty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_state_dict_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_unscale_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_unscale_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_unscale_sparse_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_grad_scaling_update_scale_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_all_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_all_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_all_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_extreme_cases_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_extreme_cases_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_extreme_cases_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_spacing_list_length_error_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_spacing_list_length_error_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_spacing_list_length_error_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_gradient_type_promotion_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_hook_remove_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_add_deterministic_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_add_large_inputs_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_add_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_deterministic_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_float32, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_copy_scalars_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_fill_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_put_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_put_non_accumulate_deterministic_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amax_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_amin_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_float64, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_mean_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_reduce_reduce_prod_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float8_e4m3fn, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float8_e4m3fnuz, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float8_e5m2, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_float8_e5m2fnuz, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_index_select_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_int64_upsample3d_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_invalid_shapes_grid_sampler_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_is_set_to_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_is_signed_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_complex32, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float8_e4m3fn, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float8_e4m3fnuz, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float8_e5m2, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_float8_e5m2fnuz, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_item_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_large_cumprod_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_large_cumsum_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_binary_op_no_materialize_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_int64, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_lazy_clone_view_materialize_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_log_normal_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_log_normal_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_log_normal_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_log_normal_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_logcumsumexp_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_lognormal_kstest_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_bool_tensor_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_bfloat16_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_bfloat16_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_bool_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_bool_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_complex128_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_complex128_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_complex64_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_complex64_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float16_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float16_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float32_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float32_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float64_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_float64_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int16_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int16_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int32_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int32_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int64_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int64_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int8_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_int8_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_uint8_bool, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_cpu_uint8_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_fill_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_bool_tensor_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_inplace_noncontiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_large_tensor_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_scatter_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_masked_select_discontiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_clone_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_consistency_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_cpu_and_cuda_ops_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_empty_like_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_factory_like_functions_preserve_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_operators_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_preserved_after_permute_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_propagation_rules_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_to_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_type_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_memory_format_type_shortcuts_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_module_share_memory_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_cpu_float32, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_deterministic_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_deterministic_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_deterministic_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_device_constrain_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_empty_w_replacement_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_empty_wo_replacement_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_gpu_device_constrain_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_multinomial_rng_state_advance_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_narrow_copy_non_contiguous_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_narrow_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_AdaptiveAvgPool2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_AdaptiveAvgPool3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_AdaptiveMaxPool2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_AvgPool3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_CTCLoss_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_EmbeddingBag_max_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_FractionalMaxPool2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_FractionalMaxPool3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxPool3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool1d_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool1d_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool1d_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool2d_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool2d_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool2d_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool3d_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool3d_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_MaxUnpool3d_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_NLLLoss_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_ReflectionPad1d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_ReflectionPad3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_ReplicationPad1d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_ReplicationPad2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_ReplicationPad3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_bincount_cpu, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_grid_sample_2d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_grid_sample_3d_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_histc_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_interpolate_bicubic_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_interpolate_bilinear_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_interpolate_linear_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_interpolate_trilinear_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_kthvalue_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_median_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_put_accumulate_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_alert_put_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_resize_quantized_cpu_qint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_resize_quantized_cpu_qint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_resize_quantized_cpu_quint2x4, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_resize_quantized_cpu_quint4x2, test/test_torch.py::TestTorchDeviceTypeCPU::test_nondeterministic_resize_quantized_cpu_quint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_normal_kstest_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_normal_kstest_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_normal_kstest_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_nullary_op_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_pairwise_distance_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_parallel_cow_materialize_error_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_params_invalidated_with_grads_invalidated_between_unscale_and_step_AdamW_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_params_invalidated_with_grads_invalidated_between_unscale_and_step_Adam_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_params_invalidated_with_grads_invalidated_between_unscale_and_step_SGD_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_pdist_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_pdist_norm_large_cpu, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_pickle_gradscaler_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_pin_memory_from_constructor_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_accumulate_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_put_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_reduced_type_float_copy_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_reduced_type_float_copy_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_repeat_interleave_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scalar_check_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_add_bool_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_add_non_unique_index_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_add_one_dim_deterministic_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_add_to_large_input_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_bool_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_float64, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_non_unique_index_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_operations_to_large_input_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_reduce_scalar_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_to_large_input_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_scatter_zero_size_index_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_serialization_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_default_tensor_type_warnings_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_float64, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_set_storage_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_shift_mem_overlap_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_skip_xla_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_all_devices_non_blocking_False_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_all_devices_non_blocking_True_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_errors_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_int8, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_from_tensor_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_meta_ok_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_qint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_qint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_quint4x2, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_quint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_storage_setitem_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_strides_propagation_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_sync_warning_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_take_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_float16, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_uint16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_uint32, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_uint64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_from_storage_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_set_errors_multigpu_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_shape_empty_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_storage_type_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_tensor_type_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_ternary_op_mem_overlap_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_bool, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_complex128, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_complex64, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_int16, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_int32, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_int64, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_int8, test/test_torch.py::TestTorchDeviceTypeCPU::test_typed_storage_meta_cpu_uint8, test/test_torch.py::TestTorchDeviceTypeCPU::test_uniform_kstest_cpu_bfloat16, test/test_torch.py::TestTorchDeviceTypeCPU::test_uniform_kstest_cpu_float16, test/test_torch.py::TestTorchDeviceTypeCPU::test_uniform_kstest_cpu_float32, test/test_torch.py::TestTorchDeviceTypeCPU::test_uniform_kstest_cpu_float64, test/test_torch.py::TestTorchDeviceTypeCPU::test_untyped_storage_meta_cpu, test/test_torch.py::TestTorchDeviceTypeCPU::test_warn_always_caught_cpu, 
test/test_torch.py::TestTorchDeviceTypeCPU::test_where_scalar_handcrafted_values_cpu
2025-03-17T17:52:36.1584050Z
2025-03-17T17:52:36.1584299Z Running test_ci_sanity_check_fail 1/1 ... [2025-03-17 17:52:36.093209]
2025-03-17T17:52:36.1584789Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T17:52:36.1585921Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_ci_sanity_check_fail.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:52:36.093526]
2025-03-17T17:52:46.9153704Z Running test_fake_tensor 1/1 ... [2025-03-17 17:52:46.915031]
2025-03-17T17:52:46.9154219Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T17:52:46.9156546Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_fake_tensor.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:52:46.915395]
2025-03-17T17:53:05.3530958Z
2025-03-17T17:53:05.3532501Z test_fake_tensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_fake_tensor_1.1_28139d77a1d1428b_.log
2025-03-17T17:53:05.3648791Z Running 268 items in this shard: test/test_fake_tensor.py::FakeTensorTest::test__adaptive_avg_pool2d_backward, test/test_fake_tensor.py::FakeTensorTest::test_alias_call, test/test_fake_tensor.py::FakeTensorTest::test_allow_meta, test/test_fake_tensor.py::FakeTensorTest::test_aten_copy_multi_device, test/test_fake_tensor.py::FakeTensorTest::test_aten_index_multi_device, test/test_fake_tensor.py::FakeTensorTest::test_aten_slice_scatter_multi_device, test/test_fake_tensor.py::FakeTensorTest::test_basic, test/test_fake_tensor.py::FakeTensorTest::test_batch_tensor, test/test_fake_tensor.py::FakeTensorTest::test_binary_op_type_promotion, test/test_fake_tensor.py::FakeTensorTest::test_constructor, test/test_fake_tensor.py::FakeTensorTest::test_convert_fake_to_real, test/test_fake_tensor.py::FakeTensorTest::test_cpu_fallback, test/test_fake_tensor.py::FakeTensorTest::test_cuda_initialized, test/test_fake_tensor.py::FakeTensorTest::test_cuda_lstm, test/test_fake_tensor.py::FakeTensorTest::test_cudnn_rnn_with_fallback, test/test_fake_tensor.py::FakeTensorTest::test_cudnn_rnn_without_fallback, test/test_fake_tensor.py::FakeTensorTest::test_custom_op_fallback, test/test_fake_tensor.py::FakeTensorTest::test_data_dependent_operator, test/test_fake_tensor.py::FakeTensorTest::test_deepcopy, test/test_fake_tensor.py::FakeTensorTest::test_device_inplace_copy, test/test_fake_tensor.py::FakeTensorTest::test_embedding_bag_meta, test/test_fake_tensor.py::FakeTensorTest::test_export_numpy, test/test_fake_tensor.py::FakeTensorTest::test_fake_dispatch_keys, test/test_fake_tensor.py::FakeTensorTest::test_fake_grad_copy, test/test_fake_tensor.py::FakeTensorTest::test_fake_mode_error, test/test_fake_tensor.py::FakeTensorTest::test_from_numpy, test/test_fake_tensor.py::FakeTensorTest::test_fsdp_flat_param, test/test_fake_tensor.py::FakeTensorTest::test_full, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_complex128, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_complex64, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float32, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float64, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float8_e4m3fn, 
test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float8_e4m3fnuz, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float8_e5m2, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_float8_e5m2fnuz, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_int16, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_int32, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_int64, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_int8, test/test_fake_tensor.py::FakeTensorTest::test_index_cuda_with_cpu_uint8, test/test_fake_tensor.py::FakeTensorTest::test_index_put_error, test/test_fake_tensor.py::FakeTensorTest::test_jagged_fake_to_fake_preserved, test/test_fake_tensor.py::FakeTensorTest::test_like_constructor, test/test_fake_tensor.py::FakeTensorTest::test_mixed_real_and_fake_inputs, test/test_fake_tensor.py::FakeTensorTest::test_mode, test/test_fake_tensor.py::FakeTensorTest::test_nan_to_num, test/test_fake_tensor.py::FakeTensorTest::test_new, test/test_fake_tensor.py::FakeTensorTest::test_non_kwarg_device, test/test_fake_tensor.py::FakeTensorTest::test_non_overlapping_stride_zero, test/test_fake_tensor.py::FakeTensorTest::test_non_parameter_grad, test/test_fake_tensor.py::FakeTensorTest::test_normalize_device, test/test_fake_tensor.py::FakeTensorTest::test_out_multi_device, test/test_fake_tensor.py::FakeTensorTest::test_parameter_instantiation, test/test_fake_tensor.py::FakeTensorTest::test_parameter_view, test/test_fake_tensor.py::FakeTensorTest::test_print_in_fake_mode, test/test_fake_tensor.py::FakeTensorTest::test_randperm, test/test_fake_tensor.py::FakeTensorTest::test_recursive_invocation, test/test_fake_tensor.py::FakeTensorTest::test_repr, test/test_fake_tensor.py::FakeTensorTest::test_same_shape_env_preserved, test/test_fake_tensor.py::FakeTensorTest::test_scalar_inputs, test/test_fake_tensor.py::FakeTensorTest::test_scan_reverse_False, test/test_fake_tensor.py::FakeTensorTest::test_scan_reverse_True, test/test_fake_tensor.py::FakeTensorTest::test_setitem, test/test_fake_tensor.py::FakeTensorTest::test_shape_take_not_device, test/test_fake_tensor.py::FakeTensorTest::test_split_return_self, test/test_fake_tensor.py::FakeTensorTest::test_throw, test/test_fake_tensor.py::FakeTensorTest::test_tolist, test/test_fake_tensor.py::FakeTensorTest::test_type_as, test/test_fake_tensor.py::FakeTensorTest::test_unsqueeze_copy, test/test_fake_tensor.py::FakeTensorTest::test_upsample_bilinear_small_channels, test/test_fake_tensor.py::FakeTensorTest::test_zero_dim, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test__adaptive_avg_pool2d_backward_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_alias_call_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_allow_meta_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_aten_copy_multi_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_aten_index_multi_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_aten_slice_scatter_multi_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_basic_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_batch_tensor_propagate_real_tensors, 
test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_binary_op_type_promotion_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_constructor_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_convert_fake_to_real_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_cpu_fallback_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_cuda_initialized_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_cuda_lstm_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_cudnn_rnn_with_fallback_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_cudnn_rnn_without_fallback_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_custom_op_fallback_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_data_dependent_operator_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_deepcopy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_device_inplace_copy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_embedding_bag_meta_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_export_numpy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_fake_dispatch_keys_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_fake_grad_copy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_fake_mode_error_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_from_numpy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_fsdp_flat_param_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_full_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_complex128_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_complex64_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float32_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float64_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float8_e4m3fn_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float8_e4m3fnuz_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float8_e5m2_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_float8_e5m2fnuz_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_int16_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_int32_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_int64_propagate_real_tensors, 
test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_int8_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_cuda_with_cpu_uint8_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_index_put_error_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_jagged_fake_to_fake_preserved_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_like_constructor_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_mixed_real_and_fake_inputs_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_mode_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_nan_to_num_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_new_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_non_kwarg_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_non_overlapping_stride_zero_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_non_parameter_grad_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_normalize_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_out_multi_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_parameter_instantiation_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_parameter_view_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_print_in_fake_mode_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_randperm_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_recursive_invocation_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_repr_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_same_shape_env_preserved_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_scalar_inputs_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_scan_reverse_False_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_scan_reverse_True_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_setitem_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_shape_take_not_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_split_return_self_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_throw_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_tolist_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_type_as_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_unsqueeze_copy_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_upsample_bilinear_small_channels_propagate_real_tensors, 
test/test_fake_tensor.py::PropagateRealTensorsFakeTensorTest::test_zero_dim_propagate_real_tensors, test/test_fake_tensor.py::FakeTensorConstHandling::test_aliased_const_write, test/test_fake_tensor.py::FakeTensorConstHandling::test_constant_invalidation, test/test_fake_tensor.py::FakeTensorConstHandling::test_constant_propagate_through_functions, test/test_fake_tensor.py::FakeTensorConstHandling::test_fake_tensor_batch_norm_cpu, test/test_fake_tensor.py::FakeTensorConstHandling::test_fake_tensor_in_intlist_repro, test/test_fake_tensor.py::FakeTensorConstHandling::test_inplace_add, test/test_fake_tensor.py::FakeTensorConstHandling::test_inplace_view_invalidation, test/test_fake_tensor.py::FakeTensorConstHandling::test_shared_storage_invalidation, test/test_fake_tensor.py::FakeTensorConstHandling::test_shared_storages, test/test_fake_tensor.py::FakeTensorConstHandling::test_simple, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_aliased_const_write_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_constant_invalidation_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_constant_propagate_through_functions_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_fake_tensor_batch_norm_cpu_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_fake_tensor_in_intlist_repro_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_inplace_add_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_inplace_view_invalidation_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_shared_storage_invalidation_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_shared_storages_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConstHandling::test_simple_propagate_real_tensors, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyCatCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyCubeCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyMulCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyMulScalarCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyNMSCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyNonzeroCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpySortCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpySplitCopyCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpySplitCopyWithIntCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyTakeCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorOpInfoTestCPU::test_fake_NumpyViewCopyCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyCatCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyCubeCustomOp_cpu_float32, 
test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyMulCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyMulScalarCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyNMSCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyNonzeroCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpySortCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpySplitCopyCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpySplitCopyWithIntCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyTakeCustomOp_cpu_float32, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOpInfoTestCPU::test_fake_propagate_real_tensors_NumpyViewCopyCustomOp_cpu_float32, test/test_fake_tensor.py::FakeTensorConverterTest::test_dead_key, test/test_fake_tensor.py::FakeTensorConverterTest::test_dead_weak_ref, test/test_fake_tensor.py::FakeTensorConverterTest::test_memoized_conversion_from_meta, test/test_fake_tensor.py::FakeTensorConverterTest::test_memoized_conversion_to_meta, test/test_fake_tensor.py::FakeTensorConverterTest::test_multiple_modes, test/test_fake_tensor.py::FakeTensorConverterTest::test_no_active_mode, test/test_fake_tensor.py::FakeTensorConverterTest::test_no_ref_cycle, test/test_fake_tensor.py::FakeTensorConverterTest::test_separate_mode_error, test/test_fake_tensor.py::FakeTensorConverterTest::test_separate_tensor_storages_non_view, test/test_fake_tensor.py::FakeTensorConverterTest::test_separate_tensor_storages_view, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_dead_key_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_dead_weak_ref_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_memoized_conversion_from_meta_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_memoized_conversion_to_meta_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_multiple_modes_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_no_active_mode_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_no_ref_cycle_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_separate_mode_error_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_separate_tensor_storages_non_view_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorConverterTest::test_separate_tensor_storages_view_propagate_real_tensors, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_conv_c1_backward, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_cross_entropy_loss, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_embedding_bag_private, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_fake_gpu_no_init, 
test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_flash_attention, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_like_ops, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_no_dispatch_with_like_function, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_non_kwarg_only_device, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_sparse_new, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_str_storage, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_tensor_constructors_all_have_kwarg_device, test/test_fake_tensor.py::FakeTensorOperatorInvariants::test_tensor_new, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_conv_c1_backward_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_cross_entropy_loss_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_embedding_bag_private_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_fake_gpu_no_init_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_flash_attention_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_like_ops_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_no_dispatch_with_like_function_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_non_kwarg_only_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_sparse_new_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_str_storage_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_tensor_constructors_all_have_kwarg_device_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorOperatorInvariants::test_tensor_new_propagate_real_tensors, test/test_fake_tensor.py::FakeTensorPropTest::test_fake_tensor_prop_on_nn_module, test/test_fake_tensor.py::FakeTensorPropTest::test_fake_tensor_prop_on_nn_module_with_optional_args, test/test_fake_tensor.py::FakeTensorPropTest::test_nonzero_stride, test/test_fake_tensor.py::FakeTensorPropTest::test_torch_load_with_fake_mode, test/test_fake_tensor.py::FakeTensorPropTest::test_unbacked_shape_realloc, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorPropTest::test_fake_tensor_prop_on_nn_module_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorPropTest::test_fake_tensor_prop_on_nn_module_with_optional_args_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorPropTest::test_nonzero_stride_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorPropTest::test_torch_load_with_fake_mode_propagate_real_tensors, test/test_fake_tensor.py::PropagateRealTensorsFakeTensorPropTest::test_unbacked_shape_realloc_propagate_real_tensors, test/test_fake_tensor.py::FakeTensorSerialization::test_serialization, test/test_fake_tensor.py::FakeTensorSerialization::test_serialization_with_tracing, test/test_fake_tensor.py::FakeTensorDispatchCache::test__upsample_bilinear2d_aa_backward_dynamic_shapes, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_bypass, 
test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_default_device, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_default_dtype, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_dispatch_key_set, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_hit, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_inplace_op, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_constants, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_device, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_dtype, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_is_conj, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_is_inference, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_is_neg, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_memory_format, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_requires_grad, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_shape, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_storage_offset, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_key_stride, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_tuple_outputs, test/test_fake_tensor.py::FakeTensorDispatchCache::test_cache_view_op, test/test_fake_tensor.py::FakeTensorDispatchCache::test_fft_hfft2_issue145522, test/test_fake_tensor.py::FakeTensorDispatchCache::test_from_buffer, test/test_fake_tensor.py::FakeTensorDispatchCache::test_inference_mode, test/test_fake_tensor.py::FakeTensorDispatchCache::test_meta_tensor_to_fake_cpu, test/test_fake_tensor.py::FakeTensorDispatchCache::test_shape_env_settings, test/test_fake_tensor.py::FakeTensorDispatchCache::test_wrapper_tensor_subclass_different_device 2025-03-17T17:53:05.3758756Z 2025-03-17T17:53:05.3758982Z Running test_jit_disabled 1/1 ... [2025-03-17 17:53:05.353907] 2025-03-17T17:53:05.3759421Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:05.3760496Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_jit_disabled.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:53:05.354292] 2025-03-17T17:53:08.8732189Z 2025-03-17T17:53:08.8733402Z test_jit_disabled 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_jit_disabled_1.1_596f15a7188a449a_.log 2025-03-17T17:53:08.8735256Z Running 3 items in this shard: test/test_jit_disabled.py::TestJitDisabled::test_attribute, test/test_jit_disabled.py::TestJitDisabled::test_recursive_script, test/test_jit_disabled.py::TestJitDisabled::test_script_module_construction 2025-03-17T17:53:08.8736517Z 2025-03-17T17:53:08.8736724Z Running test_autocast 1/1 ... [2025-03-17 17:53:08.873350] 2025-03-17T17:53:08.8737458Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:08.8738643Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_autocast.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:53:08.873631] 2025-03-17T17:53:23.2560840Z 2025-03-17T17:53:23.2561864Z test_autocast 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_autocast_1.1_cc69e67092fa0da7_.log 2025-03-17T17:53:23.2569367Z Running 20 items in this shard: test/test_autocast.py::TestAutocastCPU::test_autocast_disabled_with_fp32_dtype, test/test_autocast.py::TestAutocastCPU::test_autocast_methods_expect_builtin_promote, test/test_autocast.py::TestAutocastCPU::test_autocast_nn_16, test/test_autocast.py::TestAutocastCPU::test_autocast_nn_fp32, test/test_autocast.py::TestAutocastCPU::test_autocast_rnn, test/test_autocast.py::TestAutocastCPU::test_autocast_torch_16, test/test_autocast.py::TestAutocastCPU::test_autocast_torch_expect_builtin_promote, test/test_autocast.py::TestAutocastCPU::test_autocast_torch_fp32, test/test_autocast.py::TestAutocastCPU::test_autocast_torch_need_autocast_promote, test/test_autocast.py::TestAutocastCPU::test_cpu_autocast_deprecated_warning, test/test_autocast.py::TestAutocastCPU::test_generic_autocast, test/test_autocast.py::TestAutocastGPU::test_autocast_prioritize, test/test_autocast.py::TestAutocastGPU::test_cache_disabled, test/test_autocast.py::TestAutocastGPU::test_cast_cache_is_global, test/test_autocast.py::TestAutocastMPS::test_cast_cache_is_global, test/test_autocast.py::TestAutocastMPS::test_mps_autocast_bfloat16_supported, test/test_autocast.py::TestAutocastMPS::test_mps_autocast_error_message, test/test_autocast.py::TestTorchAutocast::test_autocast_fast_dtype, test/test_autocast.py::TestTorchAutocast::test_invalid_device, test/test_autocast.py::TestTorchAutocast::test_non_string_device 2025-03-17T17:53:23.2575605Z 2025-03-17T17:53:23.2575816Z Running test_python_dispatch 1/1 ... [2025-03-17 17:53:23.256290] 2025-03-17T17:53:23.2576266Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:23.2577352Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_python_dispatch.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:53:23.256629] 2025-03-17T17:53:38.8403619Z 2025-03-17T17:53:38.8404768Z test_python_dispatch 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_python_dispatch_1.1_119ead219a3d25e8_.log 2025-03-17T17:53:38.8449885Z Running 114 items in this shard: test/test_python_dispatch.py::TestDispatcherPythonBindings::test_call_boxed, test/test_python_dispatch.py::TestPythonRegistration::test_alias_analysis, test/test_python_dispatch.py::TestPythonRegistration::test_create_new_library, test/test_python_dispatch.py::TestPythonRegistration::test_create_new_library_fragment_no_existing, test/test_python_dispatch.py::TestPythonRegistration::test_create_new_library_fragment_with_existing, test/test_python_dispatch.py::TestPythonRegistration::test_error_for_unsupported_ns_or_kind, test/test_python_dispatch.py::TestPythonRegistration::test_error_if_fn_not_callable, test/test_python_dispatch.py::TestPythonRegistration::test_extend_library_with_dispatch_key_arg, test/test_python_dispatch.py::TestPythonRegistration::test_fallback, test/test_python_dispatch.py::TestPythonRegistration::test_fallback_fallthrough, test/test_python_dispatch.py::TestPythonRegistration::test_fallback_keyset, test/test_python_dispatch.py::TestPythonRegistration::test_fallthrough_for_dense_key_with_meta_in_tls, test/test_python_dispatch.py::TestPythonRegistration::test_finalizer, test/test_python_dispatch.py::TestPythonRegistration::test_override_aten_ops_with_multiple_libraries, test/test_python_dispatch.py::TestPythonRegistration::test_override_cpu_sum, test/test_python_dispatch.py::TestPythonRegistration::test_override_cuda_with_jiterator, test/test_python_dispatch.py::TestPythonRegistration::test_register_fallthrough, test/test_python_dispatch.py::TestPythonRegistration::test_returning_symint, test/test_python_dispatch.py::TestPythonDispatch::test_all_same_mode, test/test_python_dispatch.py::TestPythonDispatch::test_autograd_in_attr, test/test_python_dispatch.py::TestPythonDispatch::test_basic, test/test_python_dispatch.py::TestPythonDispatch::test_capture_logs_with_torch_dispatch_mode, test/test_python_dispatch.py::TestPythonDispatch::test_construct_int_tensor, test/test_python_dispatch.py::TestPythonDispatch::test_custom_autograd, test/test_python_dispatch.py::TestPythonDispatch::test_custom_size_policy_dynamic_shapes, test/test_python_dispatch.py::TestPythonDispatch::test_data_ptr_respects_numel_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_deepcopy_non_wrapper_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_deepcopy_wrapper_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_deepcopy_wrapper_subclass_with_clone_returning_different_type, test/test_python_dispatch.py::TestPythonDispatch::test_detach_appears_twice_when_called_once, test/test_python_dispatch.py::TestPythonDispatch::test_device_slowpath, test/test_python_dispatch.py::TestPythonDispatch::test_dim_slowpath, test/test_python_dispatch.py::TestPythonDispatch::test_dispatch_super_call, test/test_python_dispatch.py::TestPythonDispatch::test_dispatch_super_call_list_arg, test/test_python_dispatch.py::TestPythonDispatch::test_dispatch_super_dont_autograd, test/test_python_dispatch.py::TestPythonDispatch::test_error_using_class_method_on_mode, test/test_python_dispatch.py::TestPythonDispatch::test_exception_handling, test/test_python_dispatch.py::TestPythonDispatch::test_fancy_strides, test/test_python_dispatch.py::TestPythonDispatch::test_format, 
test/test_python_dispatch.py::TestPythonDispatch::test_get_cur_mode, test/test_python_dispatch.py::TestPythonDispatch::test_get_mode_stack, test/test_python_dispatch.py::TestPythonDispatch::test_index_put_where_only_index_is_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_invalid_ret, test/test_python_dispatch.py::TestPythonDispatch::test_is_contiguous_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_kwarg_only, test/test_python_dispatch.py::TestPythonDispatch::test_kwarg_only_and_positional_default, test/test_python_dispatch.py::TestPythonDispatch::test_layout_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_like, test/test_python_dispatch.py::TestPythonDispatch::test_list_ret, test/test_python_dispatch.py::TestPythonDispatch::test_make_fx_with_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_make_subclass_with_modes, test/test_python_dispatch.py::TestPythonDispatch::test_make_wrapper_subclass_noalloc, test/test_python_dispatch.py::TestPythonDispatch::test_make_wrapper_subclass_propagates_metadata, test/test_python_dispatch.py::TestPythonDispatch::test_maybe_tuple_bug, test/test_python_dispatch.py::TestPythonDispatch::test_mode_detection, test/test_python_dispatch.py::TestPythonDispatch::test_mode_with_make_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_multiple_ops_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_nested_push_logging_tensor_mode, test/test_python_dispatch.py::TestPythonDispatch::test_nesting_same_mode, test/test_python_dispatch.py::TestPythonDispatch::test_new_ones, test/test_python_dispatch.py::TestPythonDispatch::test_none_wrapping, test/test_python_dispatch.py::TestPythonDispatch::test_notimplemented_mode, test/test_python_dispatch.py::TestPythonDispatch::test_optional_tensor_list, test/test_python_dispatch.py::TestPythonDispatch::test_out, test/test_python_dispatch.py::TestPythonDispatch::test_produce_real_type, test/test_python_dispatch.py::TestPythonDispatch::test_record_stream, test/test_python_dispatch.py::TestPythonDispatch::test_return_and_correct_aliasing_gives_correct_stride, test/test_python_dispatch.py::TestPythonDispatch::test_return_stream, test/test_python_dispatch.py::TestPythonDispatch::test_set_data, test/test_python_dispatch.py::TestPythonDispatch::test_shallow_copy_and_detach, test/test_python_dispatch.py::TestPythonDispatch::test_sizes_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_standard_is_not_subclass, test/test_python_dispatch.py::TestPythonDispatch::test_storage, test/test_python_dispatch.py::TestPythonDispatch::test_storage_can_be_converted_to_python_object, test/test_python_dispatch.py::TestPythonDispatch::test_strides_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_subclass_autograd_device_check, test/test_python_dispatch.py::TestPythonDispatch::test_subclass_creation, test/test_python_dispatch.py::TestPythonDispatch::test_subclass_priority, test/test_python_dispatch.py::TestPythonDispatch::test_sym_sizes_strides_slow_path, test/test_python_dispatch.py::TestPythonDispatch::test_tolist_numpy_with_torch_dispatch_mode, test/test_python_dispatch.py::TestPythonDispatch::test_torch_dispatch_mode_basic, test/test_python_dispatch.py::TestPythonDispatch::test_torch_dispatch_mode_respects_no_dispatch, test/test_python_dispatch.py::TestPythonDispatch::test_torch_dispatch_mode_subclass_priority, test/test_python_dispatch.py::TestPythonDispatch::test_torch_dispatch_mode_unrelated_tensors, 
test/test_python_dispatch.py::TestPythonDispatch::test_version, test/test_python_dispatch.py::TestPythonDispatch::test_view_returns_alias_under_torch_dispatch, test/test_python_dispatch.py::TestPythonDispatch::test_with_mode_created_separately, test/test_python_dispatch.py::TestPythonDispatch::test_with_nested_modes, test/test_python_dispatch.py::TestPythonDispatch::test_wrapper_subclass_extra_dispatch_keys, test/test_python_dispatch.py::TestPythonDispatch::test_wrapper_subclass_multiprocessing_preserves_dtype, test/test_python_dispatch.py::TestPythonDispatch::test_wrapper_subclass_reentrant_dispatch_with_mode, test/test_python_dispatch.py::TestPythonDispatch::test_wrapper_subclass_serializes, test/test_python_dispatch.py::TestPythonDispatcher::test_basic, test/test_python_dispatch.py::TestPythonDispatcher::test_lstsq, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_cat_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_conv2d_cpu, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyCatCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyCubeCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyMulCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyMulScalarCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyNMSCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyNonzeroCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpySortCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpySplitCopyCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpySplitCopyWithIntCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyTakeCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_custom_NumpyViewCopyCustomOp_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_fft_fft2_cpu, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_mul_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_native_batch_norm_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_out_op_cpu, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_split_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_split_list_args_cpu_float32, test/test_python_dispatch.py::TestWrapperSubclassAliasingCPU::test_wrapper_subclass_aliasing_view_cpu_float32 2025-03-17T17:53:38.8494032Z 2025-03-17T17:53:38.8494296Z Running test_cpp_extensions_mtia_backend 1/1 ... 
[2025-03-17 17:53:38.840724] 2025-03-17T17:53:38.8510155Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:38.8511625Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cpp_extensions_mtia_backend.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:53:38.841033] 2025-03-17T17:53:42.7102852Z 2025-03-17T17:53:42.7104471Z test_cpp_extensions_mtia_backend 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cpp_extensions_mtia_backend_1.1_44084757394ae727_.log 2025-03-17T17:53:42.7109728Z Running 5 items in this shard: test/test_cpp_extensions_mtia_backend.py::TestCppExtensionMTIABackend::test_device_context, test/test_cpp_extensions_mtia_backend.py::TestCppExtensionMTIABackend::test_get_device_module, test/test_cpp_extensions_mtia_backend.py::TestCppExtensionMTIABackend::test_stream_basic, test/test_cpp_extensions_mtia_backend.py::TestCppExtensionMTIABackend::test_stream_context, test/test_cpp_extensions_mtia_backend.py::TestCppExtensionMTIABackend::test_stream_context_different_device 2025-03-17T17:53:42.7113135Z 2025-03-17T17:53:42.7113370Z Running test_autograd_fallback 1/1 ... [2025-03-17 17:53:42.710466] 2025-03-17T17:53:42.7113839Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:42.7114937Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_autograd_fallback.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:53:42.710835] 2025-03-17T17:53:47.8816070Z 2025-03-17T17:53:47.8817004Z test_autograd_fallback 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_autograd_fallback_1.1_1cbaae17125864ad_.log 2025-03-17T17:53:47.8830247Z Running 28 items in this shard: test/test_autograd_fallback.py::TestAutogradFallback::test_autograd_function_registered_to_cpu_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_autograd_function_registered_to_cpu_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_base_does_not_require_grad_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_base_does_not_require_grad_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_composite_registered_to_cpu_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_composite_registered_to_cpu_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_cpu_return_self_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_cpu_return_self_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_inplace_autograd_function_registered_to_cpu_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_inplace_autograd_function_registered_to_cpu_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_inplace_on_tensor_that_does_not_require_grad_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_inplace_on_tensor_that_does_not_require_grad_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_no_autograd_kernel_inplace_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_no_autograd_kernel_inplace_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_no_autograd_kernel_mode_nothing, 
test/test_autograd_fallback.py::TestAutogradFallback::test_no_autograd_kernel_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_no_grad_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_no_grad_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_post_autograd_returns_leaf_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_post_autograd_returns_leaf_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_post_autograd_returns_mix_of_requires_grad_tensors_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_post_autograd_returns_mix_of_requires_grad_tensors_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_supports_tensor_lists_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_supports_tensor_lists_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_undefined_grads_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_undefined_grads_mode_warn, test/test_autograd_fallback.py::TestAutogradFallback::test_undefined_inputs_outputs_mode_nothing, test/test_autograd_fallback.py::TestAutogradFallback::test_undefined_inputs_outputs_mode_warn 2025-03-17T17:53:47.8842530Z 2025-03-17T17:53:47.8842749Z Running test_multiprocessing 1/1 ... [2025-03-17 17:53:47.881828] 2025-03-17T17:53:47.8843204Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:53:47.8844301Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_multiprocessing.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:53:47.882171] 2025-03-17T17:55:02.2978571Z 2025-03-17T17:55:02.2979600Z test_multiprocessing 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_multiprocessing_1.1_61ae5ef5222789ff_.log 2025-03-17T17:55:02.2995019Z Running 41 items in this shard: test/test_multiprocessing.py::TestMultiprocessing::test_autograd_errors, test/test_multiprocessing.py::TestMultiprocessing::test_autograd_fine_with_spawn, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_bad_call, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_ipc_deadlock, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_memory_allocation, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_parameter_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_send_many, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_simple, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_small_tensors, test/test_multiprocessing.py::TestMultiprocessing::test_cuda_variable_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_empty_shared, test/test_multiprocessing.py::TestMultiprocessing::test_empty_tensor_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_empty_tensor_sharing_cuda, test/test_multiprocessing.py::TestMultiprocessing::test_empty_tensor_sharing_meta, test/test_multiprocessing.py::TestMultiprocessing::test_event, test/test_multiprocessing.py::TestMultiprocessing::test_event_handle_exporter, test/test_multiprocessing.py::TestMultiprocessing::test_event_handle_importer, test/test_multiprocessing.py::TestMultiprocessing::test_event_handle_multi_gpu, test/test_multiprocessing.py::TestMultiprocessing::test_event_multiprocess, test/test_multiprocessing.py::TestMultiprocessing::test_fd_pool, 
test/test_multiprocessing.py::TestMultiprocessing::test_fd_preserve_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_fd_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_fs, test/test_multiprocessing.py::TestMultiprocessing::test_fs_is_shared, test/test_multiprocessing.py::TestMultiprocessing::test_fs_pool, test/test_multiprocessing.py::TestMultiprocessing::test_fs_preserve_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_fs_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_inherit_tensor, test/test_multiprocessing.py::TestMultiprocessing::test_integer_parameter_serialization_cpu, test/test_multiprocessing.py::TestMultiprocessing::test_integer_parameter_serialization_cuda, test/test_multiprocessing.py::TestMultiprocessing::test_is_shared, test/test_multiprocessing.py::TestMultiprocessing::test_is_shared_cuda, test/test_multiprocessing.py::TestMultiprocessing::test_leaf_variable_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_meta_simple, test/test_multiprocessing.py::TestMultiprocessing::test_mixed_types_cuda_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_non_leaf_variable_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_parameter_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_set_thread_name, test/test_multiprocessing.py::TestMultiprocessing::test_tensor_sharing_meta, test/test_multiprocessing.py::TestMultiprocessing::test_variable_sharing, test/test_multiprocessing.py::TestMultiprocessing::test_wrong_cuda_fork 2025-03-17T17:55:02.3008954Z 2025-03-17T17:55:02.3009225Z Running test_cpp_extensions_stream_and_event 1/1 ... [2025-03-17 17:55:02.298141] 2025-03-17T17:55:02.3009737Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:55:02.3010880Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cpp_extensions_stream_and_event.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 17:55:02.298478] 2025-03-17T17:55:06.1679891Z 2025-03-17T17:55:06.1681310Z test_cpp_extensions_stream_and_event 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cpp_extensions_stream_and_event_1.1_41f931f8a8314995_.log 2025-03-17T17:55:06.1682957Z Running 1 items in this shard: test/test_cpp_extensions_stream_and_event.py::TestCppExtensionStreamAndEvent::test_stream_event 2025-03-17T17:55:06.1683920Z 2025-03-17T17:55:06.1684218Z Running test_tensor_creation_ops 1/1 ... [2025-03-17 17:55:06.168180] 2025-03-17T17:55:06.1684683Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:55:06.1687674Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_tensor_creation_ops.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:55:06.168509] 2025-03-17T17:58:59.3717894Z 2025-03-17T17:58:59.3718928Z test_tensor_creation_ops 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_tensor_creation_ops_1.1_6781f75dec4cdf15_.log 2025-03-17T17:58:59.3963601Z Running 640 items in this shard: test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_device_vs_cpu_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_device_vs_cpu_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_device_vs_cpu_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_device_vs_cpu_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_inference_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_lowp_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_arange_lowp_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_as_strided_neg_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_as_tensor_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_block_diag_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_block_diag_scipy_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cartesian_prod_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat2_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat2_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat2_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_all_dtypes_and_devices_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_big_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_empty_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_empty_legacy_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_in_channels_last_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_mem_overlap_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_channels_last_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_uint16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_uint32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_uint64, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_fast_path_dim0_dim1_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_out_memory_format_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_preserve_channels_last_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_cat_stack_cross_devices_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_combinations_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_complex_type_conversions_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_constructor_device_legacy_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_constructor_dtypes_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_ctor_with_numpy_array_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_device_rounding_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_device_rounding_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_diag_embed_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_diagflat_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dsplit_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dsplit_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dsplit_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_dstack_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_empty_full_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_empty_overflow_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_empty_strided_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_empty_tensor_props_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_eye_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_fill_all_dtypes_and_devices_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_finite_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_int16, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_float_to_int_conversion_nonfinite_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_from_file_shared_False_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_from_file_shared_True_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_full_inference_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_full_inference_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_full_inference_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_full_out_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hsplit_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hsplit_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hsplit_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_hstack_column_stack_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_window_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_window_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_window_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_window_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_kaiser_window_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_large_linspace_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_large_linspace_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_like_fn_stride_proparation_vs_tensoriterator_unary_op_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linlogspace_mem_overlap_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_float32, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_deduction_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_device_vs_cpu_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_special_steps_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_complex_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_integral_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_integral_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_integral_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_integral_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_linspace_vs_numpy_integral_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_base2_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_base2_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_int8, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_deduction_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_device_vs_cpu_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_device_vs_cpu_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_special_steps_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_special_steps_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_vs_numpy_complex_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_vs_numpy_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_logspace_vs_numpy_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_default_indexing_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_empty_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_ij_indexing_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_ij_indexing_is_default_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_inconsistent_device_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_inconsistent_dtype_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_non_1d_tensor_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_unsupported_indexing_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_vs_numpy_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_warns_if_no_indexing_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_meshgrid_xy_indexing_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_new_empty_strided_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_new_methods_requires_grad_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_new_tensor_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_new_tensor_device_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_offset_scalar_cast_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_ones_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_bool_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_int64, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_default_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_bool_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_uint16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_uint32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_from_to_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_uint16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_uint32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_full_range_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_uint16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_uint32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_random_to_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_range_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_range_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_range_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_range_factories_64bit_indexing_cpu, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_range_warning_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_bfloat16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_refs_tensor_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_repeat_interleave_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_roll_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_bartlett_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_bartlett_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_bartlett_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_blackman_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_blackman_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_blackman_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hamming_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hamming_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hamming_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hann_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hann_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_window_functions_window_hann_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_bartlett_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_bartlett_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_blackman_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_blackman_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_cosine_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_cosine_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_hamming_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_hamming_cpu_float64, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_hann_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_hann_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_nuttall_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_signal_windows_functions_window_nuttall_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_simple_scalar_cast_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_stack_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_stack_out_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_storage_filename_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_strided_mismatched_stride_shape_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_ctor_device_inference_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_device_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factories_empty_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factory_copy_var_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factory_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factory_gpu_type_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factory_gpu_type_inference_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_factory_type_inference_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_from_non_writable_numpy_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_tensor_from_sequence_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_floating_dtype_error_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_out_dtype_error_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_out_dtype_error_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_same_dtype_error_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_complex_same_dtype_error_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_polar_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_torch_polar_cpu_float64, 
test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_unpack_double_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_unpack_double_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vander_types_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vsplit_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vsplit_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vsplit_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_complex128, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_float64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_int32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_int8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_vstack_row_stack_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_cpu, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_bool, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_complex64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_float16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_float32, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_int16, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_int64, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_dtype_layout_device_match_cpu_uint8, test/test_tensor_creation_ops.py::TestTensorCreationCPU::test_zeros_out_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_normal_cpu_float32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_normal_cpu_float64, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_normal_std_error_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_rand_cpu_complex128, 
test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_rand_cpu_complex32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_rand_cpu_complex64, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_rand_cpu_float32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_rand_cpu_float64, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randint_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randint_distribution_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randint_inference_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_bfloat16, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_complex128, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_complex32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_complex64, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_float16, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_float32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randn_cpu_float64, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_random_neg_values_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randperm_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randperm_device_compatibility_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_randperm_large_cpu, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_uniform_from_to_cpu_float16, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_uniform_from_to_cpu_float32, test/test_tensor_creation_ops.py::TestRandomTensorCreationCPU::test_uniform_from_to_cpu_float64, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_empty_like_cpu, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_full_like_inference_cpu, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_ones_like_cpu, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_ones_like_multiple_device_cpu, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_zeros_like_cpu, test/test_tensor_creation_ops.py::TestLikeTensorCreationCPU::test_zeros_like_multiple_device_cpu, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_byte_to_int_cpu, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_int8, 
test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_invalid_positional_args_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_non_writable_buffer_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_not_a_buffer_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_int16, 
test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_requires_grad_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_same_type_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_shared_buffer_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_float32, 
test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_and_offset_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_count_cpu_uint8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_bool, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_complex128, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_complex64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_float16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_float32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_float64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_int16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_int32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_int64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_int8, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_uint16, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_uint32, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_uint64, test/test_tensor_creation_ops.py::TestBufferProtocolCPU::test_with_offset_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_complex64, 
test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_uint16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_uint32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_uint64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_buffer_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_dlpack_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_uint16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_uint32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_uint64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_numpy_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_float16, 
test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_alias_from_tensor_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_astensor_consistency_cpu, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_uint16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_uint32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_uint64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_buffer_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_int16, 
test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_dlpack_mult_devices_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_uint16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_uint32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_uint64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_numpy_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_from_tensor_mult_devices_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_list_cpu_uint8, 
test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_bfloat16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_bool, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_complex128, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_float16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_float64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_int16, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_int32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_int64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_int8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_copy_tensor_cpu_uint8, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_default_device_cpu, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_device_without_index_cpu, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_numpy_scalars_cpu, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_retain_autograd_history_cpu_complex64, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_retain_autograd_history_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_unsupported_alias_cpu_float32, test/test_tensor_creation_ops.py::TestAsArrayCPU::test_unsupported_alias_mult_devices_cpu_float32 2025-03-17T17:58:59.4203562Z 2025-03-17T17:58:59.4203752Z Running test_nn 1/2 ... [2025-03-17 17:58:59.373121] 2025-03-17T17:58:59.4204150Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T17:58:59.4205181Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_nn.py', '--shard-id=1', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 17:58:59.373419] 2025-03-17T18:05:48.1601047Z 2025-03-17T18:05:48.1602238Z test_nn 1/2 was successful, full logs can be found in artifacts with path test/test-reports/test_nn_1.2_f503d55d50a7756d_.log 2025-03-17T18:05:48.2175971Z Running 1055 items in this shard: test/test_nn.py::TestNN::test_AdaptiveLogSoftmax, test/test_nn.py::TestNN::test_AdaptiveLogSoftmax_cuda, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_none, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_BCELoss_no_batch_dim_sum_cuda_half, test/test_nn.py::TestNN::test_BCELoss_no_reduce_cuda, test/test_nn.py::TestNN::test_BCELoss_weights_no_reduce, test/test_nn.py::TestNN::test_BCELoss_weights_no_reduce_cuda, test/test_nn.py::TestNN::test_BCELoss_weights_no_reduce_scalar_cuda, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_legacy_enum_cuda, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_batch_dim_none_cuda_float, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_BCEWithLogitsLoss_no_reduce_scalar_cuda, test/test_nn.py::TestNN::test_CELU_no_batch_dim, test/test_nn.py::TestNN::test_CELU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_CTCLoss_zero_lengths, test/test_nn.py::TestNN::test_Conv1d, test/test_nn.py::TestNN::test_Conv1d_circular_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv1d_dilated, test/test_nn.py::TestNN::test_Conv1d_dilated_cuda, test/test_nn.py::TestNN::test_Conv1d_groups_cuda, test/test_nn.py::TestNN::test_Conv1d_pad1, test/test_nn.py::TestNN::test_Conv1d_pad1_cuda, test/test_nn.py::TestNN::test_Conv1d_pad1size1_cuda, test/test_nn.py::TestNN::test_Conv1d_pad2_cuda, test/test_nn.py::TestNN::test_Conv1d_pad_same2, test/test_nn.py::TestNN::test_Conv1d_pad_same_cuda, test/test_nn.py::TestNN::test_Conv1d_pad_same_dilated, test/test_nn.py::TestNN::test_Conv1d_pad_same_dilated_cuda, test/test_nn.py::TestNN::test_Conv1d_pad_valid_cuda, test/test_nn.py::TestNN::test_Conv1d_replicate_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv1d_stride, test/test_nn.py::TestNN::test_Conv1d_zero_batch_cuda, test/test_nn.py::TestNN::test_Conv1d_zeros_stride2_pad2, test/test_nn.py::TestNN::test_Conv2d_circular_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv2d_cuda, test/test_nn.py::TestNN::test_Conv2d_depthwise, test/test_nn.py::TestNN::test_Conv2d_depthwise_cuda, test/test_nn.py::TestNN::test_Conv2d_depthwise_dilated, test/test_nn.py::TestNN::test_Conv2d_depthwise_padded_cuda, test/test_nn.py::TestNN::test_Conv2d_depthwise_strided, test/test_nn.py::TestNN::test_Conv2d_depthwise_strided_cuda, test/test_nn.py::TestNN::test_Conv2d_depthwise_with_multiplier_cuda, test/test_nn.py::TestNN::test_Conv2d_dilated, test/test_nn.py::TestNN::test_Conv2d_dilated_cuda, test/test_nn.py::TestNN::test_Conv2d_dilated_with_long_tensor, test/test_nn.py::TestNN::test_Conv2d_groups, test/test_nn.py::TestNN::test_Conv2d_groups_cuda, 
test/test_nn.py::TestNN::test_Conv2d_groups_thnn, test/test_nn.py::TestNN::test_Conv2d_groups_thnn_cuda, test/test_nn.py::TestNN::test_Conv2d_groups_thnn_with_long_tensor, test/test_nn.py::TestNN::test_Conv2d_groups_with_long_tensor, test/test_nn.py::TestNN::test_Conv2d_no_bias, test/test_nn.py::TestNN::test_Conv2d_no_bias_with_long_tensor, test/test_nn.py::TestNN::test_Conv2d_pad_same, test/test_nn.py::TestNN::test_Conv2d_pad_same_cuda, test/test_nn.py::TestNN::test_Conv2d_reflect_stride2_pad2, test/test_nn.py::TestNN::test_Conv2d_replicate_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv2d_strided_cuda, test/test_nn.py::TestNN::test_Conv2d_with_long_tensor_cuda, test/test_nn.py::TestNN::test_Conv2d_zero_batch_with_long_tensor, test/test_nn.py::TestNN::test_Conv2d_zero_batch_with_long_tensor_cuda, test/test_nn.py::TestNN::test_Conv2d_zeros_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv3d_1x1x1_no_bias_cuda, test/test_nn.py::TestNN::test_Conv3d_circular_stride2_pad2, test/test_nn.py::TestNN::test_Conv3d_circular_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv3d_cuda, test/test_nn.py::TestNN::test_Conv3d_dilated_strided_cuda, test/test_nn.py::TestNN::test_Conv3d_groups_cuda, test/test_nn.py::TestNN::test_Conv3d_groups_with_long_tensor_cuda, test/test_nn.py::TestNN::test_Conv3d_no_bias_cuda, test/test_nn.py::TestNN::test_Conv3d_no_bias_with_long_tensor, test/test_nn.py::TestNN::test_Conv3d_pad_same, test/test_nn.py::TestNN::test_Conv3d_pad_same_cuda, test/test_nn.py::TestNN::test_Conv3d_pad_valid, test/test_nn.py::TestNN::test_Conv3d_pad_valid_cuda, test/test_nn.py::TestNN::test_Conv3d_replicate_stride2_pad2_cuda, test/test_nn.py::TestNN::test_Conv3d_stride, test/test_nn.py::TestNN::test_Conv3d_stride_padding, test/test_nn.py::TestNN::test_Conv3d_stride_padding_cuda, test/test_nn.py::TestNN::test_Conv3d_stride_padding_with_long_tensor_cuda, test/test_nn.py::TestNN::test_Conv3d_with_long_tensor, test/test_nn.py::TestNN::test_Conv3d_zero_batch, test/test_nn.py::TestNN::test_Conv3d_zero_batch_cuda, test/test_nn.py::TestNN::test_Conv3d_zeros_stride2_pad2_cuda, test/test_nn.py::TestNN::test_ConvTranspose1d, test/test_nn.py::TestNN::test_ConvTranspose1d_dilated, test/test_nn.py::TestNN::test_ConvTranspose1d_dilated_cuda, test/test_nn.py::TestNN::test_ConvTranspose1d_groups, test/test_nn.py::TestNN::test_ConvTranspose1d_no_bias, test/test_nn.py::TestNN::test_ConvTranspose1d_no_bias_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_dilated_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_groups, test/test_nn.py::TestNN::test_ConvTranspose2d_groups_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_groups_with_long_tensor, test/test_nn.py::TestNN::test_ConvTranspose2d_groups_with_long_tensor_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_no_bias, test/test_nn.py::TestNN::test_ConvTranspose2d_no_bias_with_long_tensor_cuda, test/test_nn.py::TestNN::test_ConvTranspose2d_with_long_tensor, test/test_nn.py::TestNN::test_ConvTranspose2d_with_long_tensor_cuda, test/test_nn.py::TestNN::test_ConvTranspose3d_cuda, test/test_nn.py::TestNN::test_ConvTranspose3d_dilated, test/test_nn.py::TestNN::test_CosineEmbeddingLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_CosineEmbeddingLoss_no_batch_dim_none, test/test_nn.py::TestNN::test_CosineEmbeddingLoss_no_batch_dim_none_cuda_float, test/test_nn.py::TestNN::test_CosineEmbeddingLoss_no_batch_dim_sum, 
test/test_nn.py::TestNN::test_CosineEmbeddingLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_CrossMapLRN2d, test/test_nn.py::TestNN::test_ELU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_EmbeddingBag_discontiguous_cuda, test/test_nn.py::TestNN::test_EmbeddingBag_max_cuda, test/test_nn.py::TestNN::test_EmbeddingBag_mean, test/test_nn.py::TestNN::test_EmbeddingBag_mean_cuda, test/test_nn.py::TestNN::test_EmbeddingBag_mean_padding_idx, test/test_nn.py::TestNN::test_EmbeddingBag_sparse, test/test_nn.py::TestNN::test_EmbeddingBag_sparse_cuda, test/test_nn.py::TestNN::test_EmbeddingBag_sum, test/test_nn.py::TestNN::test_Embedding_discontiguous, test/test_nn.py::TestNN::test_Embedding_discontiguous_cuda, test/test_nn.py::TestNN::test_Embedding_sparse, test/test_nn.py::TestNN::test_Flatten_cuda, test/test_nn.py::TestNN::test_Fold_cuda, test/test_nn.py::TestNN::test_Fold_no_batch_dim_input, test/test_nn.py::TestNN::test_Fold_no_batch_dim_int_input, test/test_nn.py::TestNN::test_Fold_no_batch_dim_int_input_cuda, test/test_nn.py::TestNN::test_GELU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Hardshrink_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Hardswish_no_batch_dim, test/test_nn.py::TestNN::test_HingeEmbeddingLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_HingeEmbeddingLoss_no_batch_dim_none, test/test_nn.py::TestNN::test_HingeEmbeddingLoss_no_batch_dim_sum, test/test_nn.py::TestNN::test_HingeEmbeddingLoss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_none, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_none_cuda_float, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_HuberLoss_no_batch_dim_sum, test/test_nn.py::TestNN::test_KLDivLoss_batch_mean, test/test_nn.py::TestNN::test_KLDivLoss_batch_mean_log_target, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_none, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_KLDivLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_KLDivLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_KLDivLoss_no_reduce_scalar, test/test_nn.py::TestNN::test_KLDivLoss_no_reduce_scalar_log_target, test/test_nn.py::TestNN::test_KLDivLoss_no_reduce_scalar_log_target_cuda, test/test_nn.py::TestNN::test_KLDivLoss_with_log_target_no_reduce, test/test_nn.py::TestNN::test_KLDivLoss_with_target_no_reduce, test/test_nn.py::TestNN::test_KLDivLoss_with_target_no_reduce_cuda, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_mean, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_none, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_sum, test/test_nn.py::TestNN::test_L1Loss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_L1Loss_no_reduce, test/test_nn.py::TestNN::test_L1Loss_no_reduce_complex_cuda, 
test/test_nn.py::TestNN::test_L1Loss_no_reduce_scalar, test/test_nn.py::TestNN::test_LSTM_cell, test/test_nn.py::TestNN::test_LSTM_cell_forward_input_size, test/test_nn.py::TestNN::test_LayerNorm_3d_no_affine_large_feature_cuda, test/test_nn.py::TestNN::test_LeakyReLU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Linear_no_bias_cuda, test/test_nn.py::TestNN::test_LogSigmoid_no_batch_dim_cuda, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_none, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_MSELoss_no_batch_dim_sum, test/test_nn.py::TestNN::test_MSELoss_no_reduce, test/test_nn.py::TestNN::test_MarginRankingLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_MarginRankingLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_MarginRankingLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_MarginRankingLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_MarginRankingLoss_no_batch_dim_sum_cuda_half, test/test_nn.py::TestNN::test_MaxUnpool1d_net_cuda, test/test_nn.py::TestNN::test_MaxUnpool1d_net_no_batch_dim, test/test_nn.py::TestNN::test_MaxUnpool1d_net_no_batch_dim_cuda, test/test_nn.py::TestNN::test_MaxUnpool2d_net_cuda, test/test_nn.py::TestNN::test_MaxUnpool2d_net_no_batch_dim, test/test_nn.py::TestNN::test_MaxUnpool3d_net, test/test_nn.py::TestNN::test_MaxUnpool3d_net_cuda, test/test_nn.py::TestNN::test_MaxUnpool3d_net_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Mish_no_batch_dim_cuda, test/test_nn.py::TestNN::test_ModuleList, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_1d_no_reduce, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_index_neg_cuda, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_none_cuda_float, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_batch_dim_sum_cuda_half, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_reduce, test/test_nn.py::TestNN::test_MultiLabelMarginLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_none, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_reduce, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_weights_no_reduce, test/test_nn.py::TestNN::test_MultiLabelSoftMarginLoss_weights_no_reduce_cuda, test/test_nn.py::TestNN::test_MultiMarginLoss_1d_no_reduce, test/test_nn.py::TestNN::test_MultiMarginLoss_margin_no_reduce, test/test_nn.py::TestNN::test_MultiMarginLoss_weights_no_reduce, test/test_nn.py::TestNN::test_MultiMarginLoss_weights_no_reduce_cuda, 
test/test_nn.py::TestNN::test_NLLLoss2d_no_reduce_cuda, test/test_nn.py::TestNN::test_NLLLoss2d_no_reduce_weights, test/test_nn.py::TestNN::test_NLLLossNd_no_reduce_ignore_index, test/test_nn.py::TestNN::test_NLLLossNd_no_reduce_weights, test/test_nn.py::TestNN::test_NLLLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_NLLLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_NLLLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_NLLLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_NLLLoss_no_batch_dim_none_cuda_double, test/test_nn.py::TestNN::test_NLLLoss_no_reduce, test/test_nn.py::TestNN::test_NLLLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_NLLLoss_no_reduce_ignore_index, test/test_nn.py::TestNN::test_NLLLoss_no_reduce_weights_ignore_index, test/test_nn.py::TestNN::test_NLLLoss_no_reduce_weights_ignore_index_cuda, test/test_nn.py::TestNN::test_NLLLoss_no_reduce_weights_ignore_index_neg_cuda, test/test_nn.py::TestNN::test_PReLU_no_batch_dim, test/test_nn.py::TestNN::test_PReLU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_PairwiseDistance, test/test_nn.py::TestNN::test_PairwiseDistance_with_non_default_args_cuda, test/test_nn.py::TestNN::test_ParameterDict, test/test_nn.py::TestNN::test_ParameterList, test/test_nn.py::TestNN::test_ParameterList_meta, test/test_nn.py::TestNN::test_PixelShuffle_cuda, test/test_nn.py::TestNN::test_PixelUnshuffle, test/test_nn.py::TestNN::test_PixelUnshuffle_cuda, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_sum, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_batch_dim_sum_cuda_half, test/test_nn.py::TestNN::test_PoissonNLLLoss_no_reduce, test/test_nn.py::TestNN::test_RNN_cell_forward_zero_hidden_size, test/test_nn.py::TestNN::test_RNN_cell_no_broadcasting, test/test_nn.py::TestNN::test_RNN_dropout_state, test/test_nn.py::TestNN::test_RReLU_cuda, test/test_nn.py::TestNN::test_RReLU_no_batch_dim, test/test_nn.py::TestNN::test_RReLU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_RReLU_with_up_down_cuda, test/test_nn.py::TestNN::test_RReLU_with_up_down_scalar, test/test_nn.py::TestNN::test_ReLU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_ReplicationPad3d_complex_cuda, test/test_nn.py::TestNN::test_ReplicationPad3d_cuda, test/test_nn.py::TestNN::test_ReplicationPad3d_no_batch_dim_cuda, test/test_nn.py::TestNN::test_SELU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Sequential_extend, test/test_nn.py::TestNN::test_Sequential_getitem, test/test_nn.py::TestNN::test_Sequential_iadd, test/test_nn.py::TestNN::test_Sequential_rmul, test/test_nn.py::TestNN::test_SiLU_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Sigmoid_no_batch_dim_cuda, test/test_nn.py::TestNN::test_SmoothL1Loss_beta, test/test_nn.py::TestNN::test_SmoothL1Loss_beta_cuda, test/test_nn.py::TestNN::test_SmoothL1Loss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_SmoothL1Loss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_SmoothL1Loss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_SmoothL1Loss_no_batch_dim_none, test/test_nn.py::TestNN::test_SmoothL1Loss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_SmoothL1Loss_no_reduce, test/test_nn.py::TestNN::test_SmoothL1Loss_no_reduce_cuda, 
test/test_nn.py::TestNN::test_SmoothL1Loss_no_reduce_scalar_cuda, test/test_nn.py::TestNN::test_SmoothL1Loss_zero_beta_cuda, test/test_nn.py::TestNN::test_SoftMarginLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_SoftMarginLoss_no_batch_dim_mean_cuda_float, test/test_nn.py::TestNN::test_SoftMarginLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_SoftMarginLoss_no_reduce_cuda, test/test_nn.py::TestNN::test_Softplus_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Softshrink_no_batch_dim, test/test_nn.py::TestNN::test_Softsign_no_batch_dim_cuda, test/test_nn.py::TestNN::test_Tanhshrink_no_batch_dim_cuda, test/test_nn.py::TestNN::test_TransformerDecoderLayer_relu_activation_cuda, test/test_nn.py::TestNN::test_TransformerEncoderLayer_gelu_activation, test/test_nn.py::TestNN::test_TransformerEncoderLayer_relu_activation_cuda, test/test_nn.py::TestNN::test_Transformer_cell, test/test_nn.py::TestNN::test_Transformer_multilayer_coder, test/test_nn.py::TestNN::test_Transformer_multilayer_coder_cuda, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_mean, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_mean_cuda_double, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_mean_cuda_half, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_none_cuda_float, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_none_cuda_half, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_sum, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_sum_cuda_double, test/test_nn.py::TestNN::test_TripletMarginLoss_no_batch_dim_sum_cuda_float, test/test_nn.py::TestNN::test_Unfold, test/test_nn.py::TestNN::test_Unfold_cuda, test/test_nn.py::TestNN::test_add_module_raises_error_if_attr_exists, test/test_nn.py::TestNN::test_affine_grid_3d, test/test_nn.py::TestNN::test_affine_grid_backward_cl_cf_consistency_device_cpu_nd_2, test/test_nn.py::TestNN::test_affine_grid_error_checking, test/test_nn.py::TestNN::test_assignment, test/test_nn.py::TestNN::test_batch_norm_update_stats, test/test_nn.py::TestNN::test_batchnorm_buffer_update_when_stats_are_not_tracked, test/test_nn.py::TestNN::test_batchnorm_cudnn_half, test/test_nn.py::TestNN::test_batchnorm_cudnn_nhwc, test/test_nn.py::TestNN::test_batchnorm_load_state_dict, test/test_nn.py::TestNN::test_batchnorm_nhwc_cpu, test/test_nn.py::TestNN::test_batchnorm_nhwc_cuda, test/test_nn.py::TestNN::test_batchnorm_non_contig_cpu_BatchNorm2d, test/test_nn.py::TestNN::test_batchnorm_nonaffine_cuda_half_input, test/test_nn.py::TestNN::test_batchnorm_raises_error_if_bias_is_not_same_size_as_input, test/test_nn.py::TestNN::test_batchnorm_raises_error_if_less_than_one_value_per_channel, test/test_nn.py::TestNN::test_bce_with_logits_broadcasts_weights, test/test_nn.py::TestNN::test_bce_with_logits_gives_same_result_as_sigmoid_and_bce_loss, test/test_nn.py::TestNN::test_bce_with_logits_gives_same_result_as_sigmoid_and_bce_loss_large_tensors_with_grad, test/test_nn.py::TestNN::test_bce_with_logits_has_correct_forward_grad, test/test_nn.py::TestNN::test_bce_with_logits_ones_in_pos_weights_are_the_same_as_none, test/test_nn.py::TestNN::test_bce_with_logits_with_pos_weight_has_correct_grad_at_zero, test/test_nn.py::TestNN::test_bilinear, test/test_nn.py::TestNN::test_broadcast_double_backwards_gpu, test/test_nn.py::TestNN::test_broadcast_no_grad, test/test_nn.py::TestNN::test_buffer_bad_module_subclass, test/test_nn.py::TestNN::test_buffer_not_persistent_load, 
test/test_nn.py::TestNN::test_buffers_and_named_buffers, test/test_nn.py::TestNN::test_cosine_embedding_loss_margin_no_reduce, test/test_nn.py::TestNN::test_cosine_embedding_loss_no_reduce, test/test_nn.py::TestNN::test_cosine_embedding_loss_with_diff_type, test/test_nn.py::TestNN::test_cross_entropy_loss_precision, test/test_nn.py::TestNN::test_cudnn_rnn_dropout_states_device, test/test_nn.py::TestNN::test_cudnn_weight_tying, test/test_nn.py::TestNN::test_extra_state, test/test_nn.py::TestNN::test_extra_state_missing_set_extra_state, test/test_nn.py::TestNN::test_fb_fc_packed, test/test_nn.py::TestNN::test_flatten, test/test_nn.py::TestNN::test_fractional_max_pool2d_invalid_output_ratio, test/test_nn.py::TestNN::test_gaussian_nll_loss_args, test/test_nn.py::TestNN::test_gaussian_nll_loss_broadcasting, test/test_nn.py::TestNN::test_gaussian_nll_loss_scalar_var, test/test_nn.py::TestNN::test_get_buffer, test/test_nn.py::TestNN::test_grid_sample, test/test_nn.py::TestNN::test_grid_sample_3d, test/test_nn.py::TestNN::test_grid_sample_error_checking, test/test_nn.py::TestNN::test_hardtanh_backward, test/test_nn.py::TestNN::test_hardtanh_inplace_gradgrad, test/test_nn.py::TestNN::test_huber_loss_invalid_delta, test/test_nn.py::TestNN::test_inplace_thnn, test/test_nn.py::TestNN::test_interpolate_bicubic_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bicubic_2d_zero_dim, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_2d, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_tuple_shared_2d, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_tuple_shared_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_tuple_skewed_2d_align_corners, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_tuple_skewed_2d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_bicubic_scale_tuple_skewed_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bicubic_tuple_2d, test/test_nn.py::TestNN::test_interpolate_bicubic_tuple_2d_align_corners, test/test_nn.py::TestNN::test_interpolate_bicubic_tuple_2d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_bicubic_tuple_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bilinear_2d, test/test_nn.py::TestNN::test_interpolate_bilinear_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bilinear_2d_zero_dim, test/test_nn.py::TestNN::test_interpolate_bilinear_scale_tuple_skewed_2d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_bilinear_scale_tuple_skewed_2d_cuda, test/test_nn.py::TestNN::test_interpolate_bilinear_tuple_2d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_linear_1d_align_corners, test/test_nn.py::TestNN::test_interpolate_linear_1d_cuda, test/test_nn.py::TestNN::test_interpolate_linear_1d_zero_dim, test/test_nn.py::TestNN::test_interpolate_linear_1d_zero_dim_cuda, test/test_nn.py::TestNN::test_interpolate_linear_scale_1d_align_corners, test/test_nn.py::TestNN::test_interpolate_linear_scale_1d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_linear_tuple_1d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_1d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_1d_zero_dim, test/test_nn.py::TestNN::test_interpolate_nearest_2d_launch_configs, test/test_nn.py::TestNN::test_interpolate_nearest_2d_launch_configs_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_2d_zero_dim, test/test_nn.py::TestNN::test_interpolate_nearest_3d, test/test_nn.py::TestNN::test_interpolate_nearest_3d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_3d_zero_dim, 
test/test_nn.py::TestNN::test_interpolate_nearest_3d_zero_dim_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_scale_1d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_scale_2d, test/test_nn.py::TestNN::test_interpolate_nearest_scale_2d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_scale_3d, test/test_nn.py::TestNN::test_interpolate_nearest_scale_3d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_tuple_1d, test/test_nn.py::TestNN::test_interpolate_nearest_tuple_1d_cuda, test/test_nn.py::TestNN::test_interpolate_nearest_tuple_2d, test/test_nn.py::TestNN::test_interpolate_nearest_tuple_3d_cuda, test/test_nn.py::TestNN::test_interpolate_trilinear_3d, test/test_nn.py::TestNN::test_interpolate_trilinear_3d_zero_dim_cuda, test/test_nn.py::TestNN::test_interpolate_trilinear_scale_3d_align_corners, test/test_nn.py::TestNN::test_interpolate_trilinear_tuple_3d_align_corners, test/test_nn.py::TestNN::test_interpolate_trilinear_tuple_3d_align_corners_cuda, test/test_nn.py::TestNN::test_interpolate_trilinear_tuple_3d_cuda, test/test_nn.py::TestNN::test_interpolate_undefined_behavior_casting, test/test_nn.py::TestNN::test_l1_loss_correct, test/test_nn.py::TestNN::test_layer_norm_grads_with_create_graph_flag, test/test_nn.py::TestNN::test_layer_norm_large_tensor, test/test_nn.py::TestNN::test_linear_autograd_device_cpu_bias_weightCSR, test/test_nn.py::TestNN::test_linear_autograd_device_cpu_nobias_weightCOO, test/test_nn.py::TestNN::test_linear_autograd_device_cpu_nobias_weightStrided, test/test_nn.py::TestNN::test_log_softmax_scalar, test/test_nn.py::TestNN::test_log_softmax_spatial_special_cuda, test/test_nn.py::TestNN::test_loss_equal_input_target_shape, test/test_nn.py::TestNN::test_margin_ranking_loss_no_reduce, test/test_nn.py::TestNN::test_module_backcompat, test/test_nn.py::TestNN::test_module_super_init, test/test_nn.py::TestNN::test_modules, test/test_nn.py::TestNN::test_multimarginloss_1d_input_0d_target_no_reduce, test/test_nn.py::TestNN::test_named_modules, test/test_nn.py::TestNN::test_named_parameters_remove_duplicate, test/test_nn.py::TestNN::test_nested_tensor_from_mask, test/test_nn.py::TestNN::test_overwrite_module_params_on_conversion, test/test_nn.py::TestNN::test_pack_sequence_batch_sizes_throw, test/test_nn.py::TestNN::test_padding_list, test/test_nn.py::TestNN::test_parameterlistdict_setting_attributes, test/test_nn.py::TestNN::test_pdist_empty_col, test/test_nn.py::TestNN::test_pickle_module_no_weights_only_warning, test/test_nn.py::TestNN::test_pixel_shuffle_nhwc_cpu, test/test_nn.py::TestNN::test_pixel_shuffle_unshuffle, test/test_nn.py::TestNN::test_pointwise_loss_broadcast, test/test_nn.py::TestNN::test_projections_errors_on_gru_and_rnn, test/test_nn.py::TestNN::test_projections_lstm_args_check, test/test_nn.py::TestNN::test_projections_lstm_initial_hidden_state, test/test_nn.py::TestNN::test_register_buffer_allows_overwriting_with_same_name, test/test_nn.py::TestNN::test_register_buffer_raises_error_if_attr_exists, test/test_nn.py::TestNN::test_register_parameter_raises_error_if_name_is_not_string, test/test_nn.py::TestNN::test_relu_inplace_on_view, test/test_nn.py::TestNN::test_rnn_check_device, test/test_nn.py::TestNN::test_rnn_initial_hidden_state, test/test_nn.py::TestNN::test_rnn_weight_norm, test/test_nn.py::TestNN::test_set_submodule, test/test_nn.py::TestNN::test_smoothl1loss_intergral_target, test/test_nn.py::TestNN::test_softmax_functional_dim0, test/test_nn.py::TestNN::test_softmax_functional_dim0_cuda, 
test/test_nn.py::TestNN::test_softmax_functional_dim3, test/test_nn.py::TestNN::test_softmax_lastdim, test/test_nn.py::TestNN::test_softmax_lastdim_dtype, test/test_nn.py::TestNN::test_softmax_spatial, test/test_nn.py::TestNN::test_softmax_spatial_dtype_cuda, test/test_nn.py::TestNN::test_softmax_spatial_special, test/test_nn.py::TestNN::test_softmin, test/test_nn.py::TestNN::test_spectral_norm, test/test_nn.py::TestNN::test_spectral_norm_dim, test/test_nn.py::TestNN::test_spectral_norm_forward, test/test_nn.py::TestNN::test_spectral_norm_load_state_dict, test/test_nn.py::TestNN::test_spectral_norm_pickle, test/test_nn.py::TestNN::test_state_dict, test/test_nn.py::TestNN::test_swap_module_params_poisons_acc_grad, test/test_nn.py::TestNN::test_to, test/test_nn.py::TestNN::test_transformer_args_check, test/test_nn.py::TestNN::test_transformerdecoderlayer_gelu, test/test_nn.py::TestNN::test_triplet_margin_loss_no_reduce, test/test_nn.py::TestNN::test_triplet_margin_loss_swap, test/test_nn.py::TestNN::test_type, test/test_nn.py::TestNN::test_unflatten, test/test_nn.py::TestNN::test_unfold_invalid_arg, test/test_nn.py::TestNN::test_upsamplingBilinear2d_spatial_invariance, test/test_nn.py::TestNN::test_upsamplingLinear1d_spatial_invariance, test/test_nn.py::TestNN::test_upsampling_bfloat16, test/test_nn.py::TestNN::test_upsampling_not_recompute_scale_factor, test/test_nn.py::TestNN::test_upsampling_small_scale, test/test_nn.py::TestNN::test_weighted_huber_loss, test/test_nn.py::TestNN::test_weighted_l1_loss_with_weights, test/test_nn.py::TestNN::test_weighted_mse_loss, test/test_nn.py::TestFusionEval::test_fuse_module_eval_numerics, test/test_nn.py::TestConstantPadNd::test_constant_pad_nd, test/test_nn.py::TestAddRelu::test_add_relu, test/test_nn.py::TestFunctionalPickle::test_pickle_softsign, test/test_nn.py::TestFusionUtils::test_fuse_linear_bn_requires_grad, test/test_nn.py::TestUtils::test_consume_prefix_in_state_dict_if_present, test/test_nn.py::TestNNDeviceTypeCPU::test_CTCLoss_no_batch_dim_reduction_mean_use_module_form_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_CTCLoss_no_batch_dim_reduction_none_use_module_form_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_CTCLoss_no_batch_dim_reduction_sum_use_module_form_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_CTCLoss_no_batch_dim_reduction_sum_use_module_form_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_GroupNorm_empty_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_GroupNorm_general_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_GroupNorm_memory_format_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_GroupNorm_numeric_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_InstanceNorm1d_general_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_InstanceNorm2d_general_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_InstanceNorm3d_general_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_LSTM_differentiable_backward_using_oneDNN_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_LSTM_grad_and_gradgrad_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_LayerNorm_general_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_MarginLoss_empty_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_MarginLoss_warnings_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_ReflectionPad2d_large_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_ReflectionPad_empty_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_ReplicationPad2d_large_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_ReplicationPad_empty_cpu_complex128, 
test/test_nn.py::TestNNDeviceTypeCPU::test_TransformerEncoder_empty_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_Unfold_empty_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_activations_bfloat16_half_cpu_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_activations_bfloat16_half_cpu_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_adaptiveavg_pool1d_shmem_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_affine_2d_rotate0_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_affine_3d_rotateRandom_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_avg_pool_large_tensor2_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_avg_pool_large_tensor_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_affine_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_affine_mixed_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_affine_mixed_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_eval_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_eval_mixed_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_eval_mixed_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_grad_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_batchnorm_simple_average_mixed_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_False_norm_type_0_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_False_norm_type_1_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_False_norm_type_2_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_False_norm_type_inf_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_True_norm_type_0_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_foreach_True_norm_type_2_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_norm_multi_device_foreach_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_value_foreach_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_clip_grad_value_foreach_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_conv_empty_input_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_conv_empty_input_cpu_complex128, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_64bit_reduction_none_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_label_smoothing_errors_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_label_smoothing_weight_ignore_indices_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_large_tensor_reduction_none_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_large_tensor_reduction_sum_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_loss_one_hot_target_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_loss_prob_target_no_batch_dim_reduction_mean_weighted_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_loss_prob_target_no_batch_dim_reduction_none_weighted_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_loss_prob_target_no_batch_dim_reduction_sum_weighted_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_cross_entropy_loss_prob_target_no_batch_dim_reduction_sum_weighted_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_ctc_loss_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_ctc_loss_cudnn_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_elu_inplace_overlap_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_elu_inplace_with_neg_alpha_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_glu_bfloat16_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_grid_sample_nan_inf_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_groupnorm_nhwc_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_groupnorm_nhwc_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_groupnorm_nhwc_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_gumbel_softmax_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_gumbel_softmax_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_hardswish_inplace_overlap_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_for_single_spatial_element_during_training_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_if_input_channels_is_not_num_features_InstanceNorm2d_no_batch_dim_False_affine_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_if_input_channels_is_not_num_features_InstanceNorm3d_no_batch_dim_False_affine_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_if_input_channels_is_not_num_features_InstanceNorm3d_no_batch_dim_False_affine_True_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_if_input_channels_is_not_num_features_InstanceNorm3d_no_batch_dim_True_affine_False_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_instancenorm_raises_error_if_less_than_one_value_per_channel_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_invalid_reduction_strings_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_leaky_relu_inplace_overlap_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_leaky_relu_inplace_with_neg_slope_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_leaky_relu_inplace_with_zero_slope_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_linear_empty_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_log_softmax_big_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_log_softmax_cpu_cpu_bfloat16, test/test_nn.py::TestNNDeviceTypeCPU::test_masked_softmax_forward_with_nans_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_masked_softmax_transformer_layout_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_module_to_empty_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_module_to_empty_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_byte_target_matches_long_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_invalid_weights_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_large_tensor_reduction_mean_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_large_tensor_reduction_none_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_out_of_bounds_ignore_index_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nll_loss_total_weight_is_zero_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nn_scalars_reductions_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_nonlinearity_propagate_nan_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_one_hot_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_overwrite_module_params_on_conversion_cpu_device_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_pad_cpu_complex128, test/test_nn.py::TestNNDeviceTypeCPU::test_prelu_backward_32bit_indexing_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_rmsnorm_numeric_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_rnn_fused_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_skip_init_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_smooth_l1_loss_vs_huber_loss_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_softmax_backward_unaligned_grad_output_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_softmax_cpu_cpu_bfloat16, 
test/test_nn.py::TestNNDeviceTypeCPU::test_softmax_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_softmax_double_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_softmax_results_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_threshold_inplace_overlap_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_to_complex_cpu_complex128, test/test_nn.py::TestNNDeviceTypeCPU::test_to_complex_cpu_complex64, test/test_nn.py::TestNNDeviceTypeCPU::test_to_complex_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_transformerencoderlayer_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_transformerencoderlayer_fast_path_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_triplet_margin_with_distance_loss_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_triplet_margin_with_distance_loss_default_parity_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiLinear2d_consistency_interp_size_bug_memory_format0_align_corners_False_input_size_403_output_size_377_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiLinear2d_consistency_interp_size_bug_memory_format1_align_corners_False_input_size_399_output_size_437_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiLinear2d_consistency_interp_size_bug_memory_format1_align_corners_True_input_size_399_output_size_437_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiLinear2d_consistency_interp_size_bug_memory_format1_align_corners_True_input_size_403_output_size_377_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_False_align_corners_False_mode_bicubic_memory_format0_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_False_align_corners_True_mode_bicubic_memory_format0_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_False_align_corners_True_mode_bilinear_memory_format0_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_True_align_corners_False_mode_bicubic_memory_format0_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_True_align_corners_False_mode_bilinear_memory_format1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_True_align_corners_True_mode_bicubic_memory_format1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_antialias_True_align_corners_True_mode_bilinear_memory_format1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format0_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bicubic_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_False_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_False_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_3_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_False_non_contig_sliced_batch_size_5_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_32_check_as_unsqueezed_3d_tensor_True_non_contig_sliced_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_False_non_contig_restrided_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_False_batch_size_5_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_consistency_memory_format1_mode_bilinear_antialias_True_align_corners_True_num_channels_5_output_size_600_check_as_unsqueezed_3d_tensor_True_non_contig_restrided_batch_size_1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bicubic_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bicubic_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bicubic_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bilinear_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bilinear_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_bilinear_int8_cpu_int8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest-exact_float64_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest-exact_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest-exact_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest-exact_int8_cpu_int8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest_int16_cpu_int16, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_3_mode_nearest_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bicubic_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bicubic_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bicubic_int8_cpu_int8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bilinear_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bilinear_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bilinear_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_bilinear_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest-exact_float64_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest-exact_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest-exact_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest_float64_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_False_num_channels_5_mode_nearest_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bicubic_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bicubic_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bicubic_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bicubic_int8_cpu_int8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bicubic_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bilinear_float64_cpu_float64, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bilinear_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bilinear_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_bilinear_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest-exact_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest-exact_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest-exact_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest_float64_cpu_float64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest_int64_cpu_int64, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_3_mode_nearest_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_bicubic_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_bicubic_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_bilinear_int16_cpu_int16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_bilinear_int8_cpu_int8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_nearest-exact_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_nearest-exact_int32_cpu_int32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_nearest-exact_uint8_cpu_uint8, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBiMode2d_nonsupported_dtypes_antialias_True_num_channels_5_mode_nearest_float32_cpu_float32, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBicubic2d_correctness_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingBilinear2d_aa_correctness_memory_format1_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest1d_correctness_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_correctness_memory_format0_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_correctness_memory_format1_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_launch_fail_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_launch_rocm_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_memory_format0_mode_nearest_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_memory_format1_mode_nearest-exact_cpu, 
test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest2d_memory_format1_mode_nearest_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest3d_correctness_memory_format0_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest3d_correctness_memory_format0_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest3d_correctness_memory_format1_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearest3d_launch_config_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact1d_correctness_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact1d_correctness_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact2d_correctness_memory_format0_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact2d_correctness_memory_format1_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact3d_correctness_memory_format0_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact3d_correctness_memory_format0_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact3d_correctness_memory_format1_isize_10_osize_15_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingNearestExact3d_correctness_memory_format1_isize_20_osize_11_cpu, test/test_nn.py::TestNNDeviceTypeCPU::test_upsampling_64bit_indexing_channels_last_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_upsamplingnearest2d_backward_64bit_indexing_cpu_float16, test/test_nn.py::TestNNDeviceTypeCPU::test_variable_sequence_cpu_float32 2025-03-17T18:05:48.2732626Z 2025-03-17T18:05:48.2732849Z Running nn/test_pooling 1/1 ... [2025-03-17 18:05:48.162280] 2025-03-17T18:05:48.2733275Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:05:48.2734374Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'nn/test_pooling.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
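Note: the exact argv logged above for the nn/test_pooling shard can be replayed outside the CI harness with a short Python wrapper. This is only a sketch, assuming the same container layout as this job (conda env at /opt/conda/envs/py_3.13, working directory set to the repository's test/ directory); the interpreter path and flags are copied verbatim from the "Executing [...]" entry above, not invented here.

import subprocess

# Argv copied verbatim from the "Executing [...]" log entry above
# (nn/test_pooling, shard 1 of 1). Assumes the CI container's conda
# env path exists locally; adjust the interpreter path otherwise.
argv = [
    "/opt/conda/envs/py_3.13/bin/python", "-bb", "nn/test_pooling.py",
    "--shard-id=1", "--num-shards=1", "-v", "-vv", "-rfEX",
    "-p", "no:xdist", "--use-pytest", "-x", "--reruns=2",
    "--import-slow-tests", "--import-disabled-tests",
]
# check=True raises CalledProcessError on a non-zero exit, which is how
# a failing test file would surface when replaying the shard by hand.
subprocess.run(argv, check=True)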
[2025-03-17 18:05:48.162586] 2025-03-17T18:06:48.8637256Z 2025-03-17T18:06:48.8638306Z nn/test_pooling 1/1 was successful, full logs can be found in artifacts with path test/test-reports/nn.test_pooling_1.1_a3bff201591fb55b_.log 2025-03-17T18:06:48.8679263Z Running 104 items in this shard: test/nn/test_pooling.py::TestAvgPool::test_avg_pool1d_ceil_mode, test/nn/test_pooling.py::TestAvgPool::test_avg_pool2d_ceil_mode, test/nn/test_pooling.py::TestAvgPool::test_avg_pool3d_ceil_mode, test/nn/test_pooling.py::TestAvgPool::test_doubletensor_avg_pool2d, test/nn/test_pooling.py::TestAvgPool::test_doubletensor_avg_pool2d_with_divisor, test/nn/test_pooling.py::TestAvgPool::test_doubletensor_avg_pool3d, test/nn/test_pooling.py::TestAvgPool::test_doubletensor_avg_pool3d_with_divisor, test/nn/test_pooling.py::TestPoolingNN::test_MaxUnpool2d_output_size, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_avg_pooling_nhwc_overflow, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_avg_pooling_overflow, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_avg_nhwc, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_avg_nhwc_launch_config_backward, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_avg_nhwc_launch_config_forward, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_avg_nhwc_non_contiguous, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_lower_precision, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_size_none, test/nn/test_pooling.py::TestPoolingNN::test_adaptive_pooling_size_overflow, test/nn/test_pooling.py::TestPoolingNN::test_max_unpool, test/nn/test_pooling.py::TestPoolingNN::test_max_unpool2d_nhwc_cpu, test/nn/test_pooling.py::TestPoolingNN::test_max_unpool3d_input_check, test/nn/test_pooling.py::TestPoolingNN::test_quantized_max_pool1d_empty_kernel, test/nn/test_pooling.py::TestPoolingNN::test_quantized_max_pool3d, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AdaptiveMaxPool1d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AdaptiveMaxPool2d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AdaptiveMaxPool3d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AdaptiveMaxPool_zero_batch_dim_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AvgPool2d_empty_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_AvgPool3d_backward_after_cat_dim1_device_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool2d_zero_batch_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool2d_zero_out_size_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool2d_zero_samples_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool3d_zero_batch_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool3d_zero_out_size_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_FractionalMaxPool3d_zero_samples_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxPool1d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxPool2d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxPool3d_indices_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxPool_zero_batch_dim_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case10_cpu, 
test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case1_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case2_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case3_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case4_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case5_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case6_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case7_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case8_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_index_errors_case9_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_MaxUnpool_zero_batch_dim_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_avg_pool2d_output_size_one_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_avg_pool3d_output_size_one_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pool_odd_size_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_backward_fails_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_empty_output_size_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_empty_output_size_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_max_nhwc_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_max_nhwc_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_no_suppot_input_cpu_int16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_no_suppot_input_cpu_int32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_no_suppot_input_cpu_int64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_no_suppot_input_cpu_int8, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_no_suppot_input_cpu_uint8, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_zero_batch_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_adaptive_pooling_zero_batch_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_avg_pool2d_nhwc_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_avg_pool2d_nhwc_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_avg_pool2d_reduced_floating_cpu_bfloat16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_avg_pool2d_reduced_floating_cpu_float16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_fractional_max_pool2d_backward_fails_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_fractional_max_pool2d_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_fractional_max_pool3d_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_fractional_max_pool_nan_inf_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool1d_corner_cases_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool1d_corner_cases_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool1d_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool1d_cpu_float64, 
test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_corner_cases_cpu_int32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_corner_cases_cpu_int64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_indices_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_nhwc_cpu_bfloat16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_nhwc_cpu_float16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_nhwc_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_nhwc_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool2d_with_indices_backward_fails_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool3d_ndhwc_cpu_bfloat16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool3d_ndhwc_cpu_float16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool3d_ndhwc_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool3d_ndhwc_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool_bfloat16_half_cpu_bfloat16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool_bfloat16_half_cpu_float16, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_max_pool_nan_inf_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_maxpool3d_non_square_backward_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_maxpool_indices_no_batch_dim_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pool3d_large_size_int64_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pool3d_size_one_feature_dim_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pool_invalid_size_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pool_large_size_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_bfloat16_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_large_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_max_nhwc_cpu_float32, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_max_nhwc_cpu_float64, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_shape_cpu, test/nn/test_pooling.py::TestPoolingNNDeviceTypeCPU::test_pooling_zero_stride_cpu 2025-03-17T18:06:48.8719140Z 2025-03-17T18:06:48.8719327Z Running test_overrides 1/1 ... [2025-03-17 18:06:48.864051] 2025-03-17T18:06:48.8719763Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:06:48.8720833Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_overrides.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
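The test IDs printed in these "Running N items in this shard" lists are ordinary pytest node IDs, so a single case can be reproduced without the sharding machinery. A minimal sketch, assuming a PyTorch source checkout with pytest installed and the repository root as the working directory; the node ID is taken from the nn/test_pooling list above, and the harness-specific flags (--shard-id, --use-pytest, --import-slow-tests, ...) are deliberately omitted:

import subprocess
import sys

# One node ID copied from the "Running 104 items in this shard" list above.
node_id = "test/nn/test_pooling.py::TestAvgPool::test_avg_pool1d_ceil_mode"

# Run just that case through pytest; -v mirrors the verbose output the CI
# invocation requests. Uses the current interpreter rather than the CI
# container's conda env path.
subprocess.run([sys.executable, "-m", "pytest", node_id, "-v"], check=True)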
[2025-03-17 18:06:48.864359] 2025-03-17T18:08:03.9219750Z 2025-03-17T18:08:03.9221189Z test_overrides 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_overrides_1.1_fb09db68dbd55ada_.log 2025-03-17T18:08:03.9718112Z Running 1467 items in this shard: test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_H___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_T___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__backward_hooks___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__base___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__cdata___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__grad___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__grad_fn___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__post_accumulate_grad_hooks___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase__version___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_data___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_device___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_dtype___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_grad___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_grad_fn___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_imag___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_cpu___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_cuda___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_ipu___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_leaf___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_maia___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_meta___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_mkldnn___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_mps___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_mtia___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_nested___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_quantized___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_sparse___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_sparse_csr___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_vulkan___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_xla___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_is_xpu___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_itemsize___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_layout___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_mH___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_mT___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_name___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_names___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_nbytes___get__, 
test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_ndim___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_output_nr___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_real___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_requires_grad___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_retains_grad___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_shape___get__, test/test_overrides.py::TestTorchFunctionOverride::test_TensorBase_volatile___get__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___add__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___and__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___array__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___array_wrap__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___bool__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___complex__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___contains__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___cuda_array_interface_____get__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___deepcopy__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___div__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___dlpack__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___dlpack_device__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___eq__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___float__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___floordiv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___format__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ge__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___getitem__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___gt__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___iadd__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___iand__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___idiv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ifloordiv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ilshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___imod__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___imul__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___index__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___int__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___invert__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ior__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___irshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___isub__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ixor__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___le__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___len__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___long__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___lshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___lt__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___matmul__, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___mod__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___mul__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ne__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___nonzero__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___or__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___radd__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rand__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rdiv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___reduce_ex__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___repr__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___reversed__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rfloordiv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rlshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rmatmul__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rmod__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rmul__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___ror__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rpow__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rrshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rshift__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rsub__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___rxor__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___setitem__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___setstate__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___sub__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___truediv__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor___xor__, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__autocast_to_full_precision, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__autocast_to_reduced_precision, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__clear_non_serializable_cached_data, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__coalesced_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__dimI, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__dimV, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__is_view, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__nested_tensor_size, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__nested_tensor_storage_offsets, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__nested_tensor_strides, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__nnz, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__sparse_mask_projection, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__to_dense, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__update_names, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor__values, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_abs, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_abs_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_absolute, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_absolute_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_acos, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_acos_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_acosh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_acosh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_add, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_add_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addbmm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addbmm_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addcdiv, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addcdiv_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addcmul, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addcmul_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addmm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addmm_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addmv, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addmv_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_addr_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_adjoint, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_align_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_align_to, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_all, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_allclose, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_amax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_amin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_aminmax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_angle, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_any, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_apply_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arccos, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arccos_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arccosh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arccosh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arcsin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arcsin_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arcsinh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arcsinh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctan, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctan2, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctan2_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctan_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctanh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_arctanh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_argmax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_argmin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_argsort, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_argwhere, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_as_strided, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_as_strided_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_as_strided_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_asin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_asin_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_asinh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_asinh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atan, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atan2, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atan2_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atan_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atanh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_atanh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_backward, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_baddbmm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_baddbmm_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bernoulli, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bernoulli_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bfloat16, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bincount, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_and, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_and_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_left_shift, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_left_shift_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_not, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_not_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_or, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_or_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_right_shift, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_right_shift_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_xor, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bitwise_xor_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bmm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_bool, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_broadcast_to, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_byte, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cauchy_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ccol_indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cdouble, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ceil, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ceil_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cfloat, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_chalf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_char, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cholesky, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cholesky_inverse, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cholesky_solve, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_chunk, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp_max, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp_max_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp_min, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clamp_min_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clip, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clip_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_clone, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_coalesce, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_col_indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_conj, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_conj_physical, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_conj_physical_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_contiguous, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_copy_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_copysign, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_copysign_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_corrcoef, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cos, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cos_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cosh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cosh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_count_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cov, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cpu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cross, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_crow_indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cuda, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cummax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cummin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cumprod, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cumprod_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cumsum, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_cumsum_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_data_ptr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_deg2rad, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_deg2rad_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dense_dim, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dequantize, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_det, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_detach, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_detach_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diag, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diag_embed, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diagflat, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diagonal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diagonal_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_diff, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_digamma, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_digamma_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dim, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dim_order, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dist, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_div, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_div_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_divide, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_divide_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dot, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_double, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_dsplit, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_element_size, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_eq, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_eq_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_equal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erf_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erfc, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erfc_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erfinv, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_erfinv_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_exp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_exp2, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_exp2_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_exp_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_expand, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_expand_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_expm1, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_expm1_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_exponential_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fill_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fill_diagonal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fix, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fix_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_flatten, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_flip, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fliplr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_flipud, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_float, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_float_power, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_float_power_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_floor, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_floor_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_floor_divide, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_floor_divide_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fmax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fmin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fmod, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_fmod_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_frac, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_frac_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_frexp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_gather, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_gcd, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_gcd_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ge, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ge_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_geometric_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_geqrf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ger, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_get_device, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_greater, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_greater_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_greater_equal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_greater_equal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_gt, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_gt_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_half, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_hardshrink, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_has_names, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_heaviside, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_heaviside_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_histc, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_histogram, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_hsplit, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_hypot, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_hypot_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_i0, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_i0_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_igamma, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_igamma_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_igammac, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_igammac_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_add, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_add_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_copy, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_copy_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_fill, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_fill_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_put, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_put_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_reduce_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_index_select, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_inner, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_int, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_int_repr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_inverse, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ipu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_coalesced, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_complex, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_conj, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_contiguous, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_distributed, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_floating_point, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_inference, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_neg, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_pinned, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_same_size, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_set_to, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_shared, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_is_signed, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isclose, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isfinite, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isinf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isnan, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isneginf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isposinf, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_isreal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_istft, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_item, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_kron, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_kthvalue, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lcm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lcm_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ldexp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ldexp_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_le, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_le_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lerp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lerp_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_less, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_less_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_less_equal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_less_equal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lgamma, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lgamma_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log10, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log10_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log1p, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log1p_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log2, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log2_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log_normal_, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_log_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logaddexp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logaddexp2, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logcumsumexp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logdet, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_and, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_and_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_not, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_not_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_or, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_or_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_xor, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logical_xor_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logit, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logit_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_logsumexp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_long, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lt, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lt_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_lu_solve, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_map2_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_map_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_masked_fill, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_masked_fill_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_masked_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_masked_scatter_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_masked_select, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_matmul, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_matrix_exp, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_matrix_power, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_max, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_maximum, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mean, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_median, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_min, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_minimum, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mode, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_module_load, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_moveaxis, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_movedim, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_msort, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mtia, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mul, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mul_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_multinomial, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_multiply, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_multiply_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mv, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mvlgamma, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_mvlgamma_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nan_to_num, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nan_to_num_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nanmean, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nanmedian, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nanquantile, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nansum, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_narrow, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_narrow_copy, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ndimension, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ne, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ne_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_neg, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_neg_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_negative, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_negative_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nelement, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nextafter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nextafter_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_nonzero_static, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_norm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_normal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_not_equal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_not_equal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_numel, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_numpy, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_orgqr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ormqr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_outer, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_permute, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_pin_memory, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_pinverse, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_polygamma, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_polygamma_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_positive, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_pow, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_pow_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_prelu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_prod, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_put, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_put_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_q_per_channel_axis, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_q_per_channel_scales, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_q_per_channel_zero_points, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_q_scale, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_q_zero_point, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_qr, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_qscheme, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_quantile, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rad2deg, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rad2deg_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_random_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_ravel, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_reciprocal, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_reciprocal_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_record_stream, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_refine_names, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_register_hook, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_register_post_accumulate_grad_hook, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_relu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_relu_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_remainder, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_remainder_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rename, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rename_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_renorm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_renorm_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_repeat, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_repeat_interleave, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_requires_grad_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_reshape, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_reshape_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resize, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resize_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resize_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resize_as_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resize_as_sparse_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resolve_conj, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_resolve_neg, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_retain_grad, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_roll, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rot90, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_round, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_round_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_row_indices, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rsqrt, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_rsqrt_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter_add, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter_add_, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_scatter_reduce_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_select, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_select_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_set_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sgn, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sgn_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_share_memory_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_short, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sigmoid, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sigmoid_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sign, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sign_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_signbit, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sin, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sin_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sinc, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sinc_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sinh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sinh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_size, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_slice_inverse, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_slice_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_slogdet, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_smm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sort, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sparse_dim, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sparse_mask, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sparse_resize_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sparse_resize_and_clear_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_split, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_split_with_sizes, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sqrt, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sqrt_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_square, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_square_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_squeeze, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_squeeze_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sspaddmm, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_std, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_stft, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_storage, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_storage_offset, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_storage_type, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sub, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sub_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_subtract, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_subtract_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sum, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_sum_to_size, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_svd, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_swapaxes, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_swapaxes_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_swapdims, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_swapdims_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_t, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_t_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_take, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_take_along_dim, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tan, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tan_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tanh, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tanh_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tensor_split, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tile, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_to, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_to_dense, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_to_mkldnn, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_to_sparse, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tolist, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_topk, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_trace, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_transpose, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_transpose_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_triangular_solve, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tril, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_tril_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_triu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_triu_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_true_divide, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_true_divide_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_trunc, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_trunc_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_type, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_type_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unbind, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unfold, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_uniform_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unique, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unique_consecutive, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unsafe_chunk, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unsafe_split, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unsafe_split_with_sizes, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unsqueeze, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_unsqueeze_, 
test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_untyped_storage, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_values, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_var, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_vdot, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_view, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_view_as, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_vsplit, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_where, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_xlogy, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_xlogy_, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_xpu, test/test_overrides.py::TestTorchFunctionOverride::test_Tensor_zero_, test/test_overrides.py::TestTorchFunctionOverride::test_base, test/test_overrides.py::TestTorchFunctionOverride::test_dtype_override, test/test_overrides.py::TestTorchFunctionOverride::test_grad, test/test_overrides.py::TestTorchFunctionOverride::test_has_torch_function_non_sequence, test/test_overrides.py::TestTorchFunctionOverride::test_mean_semantics, test/test_overrides.py::TestTorchFunctionOverride::test_mm_semantics, test/test_overrides.py::TestTorchFunctionOverride::test_pow_rpow, test/test_overrides.py::TestTorchFunctionOverride::test_precedence_semantics, test/test_overrides.py::TestTorchFunctionOverride::test_tensor_subclass_propagation, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_fft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_fft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_fftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_fftshift, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_hfft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_hfft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_hfftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ifft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ifft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ifftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ifftshift, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ihfft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ihfft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_ihfftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_irfft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_irfft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_irfftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_rfft, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_rfft2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__fft_fft_rfftn, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_cholesky, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_cholesky_ex, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_cond, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_cross, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_det, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_diagonal, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_eig, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_eigh, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_eigvals, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_eigvalsh, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_householder_product, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_inv, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_inv_ex, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_ldl_factor, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_ldl_factor_ex, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_ldl_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_lstsq, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_lu, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_lu_factor, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_lu_factor_ex, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_lu_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_matmul, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_matrix_exp, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_matrix_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_matrix_power, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_matrix_rank, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_multi_dot, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_pinv, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_qr, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_slogdet, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_solve_ex, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_solve_triangular, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_svd, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_svdvals, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_tensorinv, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_tensorsolve, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_vander, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_vecdot, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__linalg_linalg_vector_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_avg_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_avg_pool3d, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_gelu, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_linear, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_log_sigmoid, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_one_hot, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_scaled_dot_product_attention, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_softplus, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__nn_softshrink, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_airy_ai, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_bessel_j0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_bessel_j1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_bessel_y0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_bessel_y1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_chebyshev_polynomial_t, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_chebyshev_polynomial_u, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_chebyshev_polynomial_v, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_chebyshev_polynomial_w, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_digamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_entr, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_erf, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_erfc, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_erfcx, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_erfinv, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_exp2, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_expit, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_expm1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_gammainc, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_gammaincc, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_gammaln, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_hermite_polynomial_h, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_hermite_polynomial_he, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_i0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_i0e, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_i1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_i1e, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_laguerre_polynomial_l, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_legendre_polynomial_p, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_log1p, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_log_ndtr, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_log_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_logit, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_logsumexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_modified_bessel_i0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_modified_bessel_i1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_modified_bessel_k0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_modified_bessel_k1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_multigammaln, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_ndtr, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_ndtri, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_polygamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_psi, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_round, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_scaled_modified_bessel_k0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_scaled_modified_bessel_k1, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_shifted_chebyshev_polynomial_t, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_shifted_chebyshev_polynomial_u, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_shifted_chebyshev_polynomial_v, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_shifted_chebyshev_polynomial_w, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_sinc, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_spherical_bessel_j0, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_xlog1py, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_xlogy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__C__special_special_zeta, test/test_overrides.py::TestTorchFunctionOverride::test_torch__assert_async, test/test_overrides.py::TestTorchFunctionOverride::test_torch__conj_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__functional_assert_async, test/test_overrides.py::TestTorchFunctionOverride::test_torch__fw_primal_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__lobpcg_lobpcg, test/test_overrides.py::TestTorchFunctionOverride::test_torch__lowrank_pca_lowrank, test/test_overrides.py::TestTorchFunctionOverride::test_torch__lowrank_svd_lowrank, test/test_overrides.py::TestTorchFunctionOverride::test_torch__make_dual_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__native_batch_norm_legit, test/test_overrides.py::TestTorchFunctionOverride::test_torch__neg_view_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__reshape_alias_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__rowwise_prune, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch__sparse_broadcast_to_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_acos, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_asin, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_atan, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_cos, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_cosh, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_sin, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_sinh, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_sqrt, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_tan, test/test_overrides.py::TestTorchFunctionOverride::test_torch__sym_tanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch__values_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch__wrapped_linear_prepack, test/test_overrides.py::TestTorchFunctionOverride::test_torch__wrapped_quantized_linear_prepacked, test/test_overrides.py::TestTorchFunctionOverride::test_torch_abs, test/test_overrides.py::TestTorchFunctionOverride::test_torch_absolute, test/test_overrides.py::TestTorchFunctionOverride::test_torch_acos, test/test_overrides.py::TestTorchFunctionOverride::test_torch_acosh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_adaptive_avg_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_adaptive_max_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_add, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addbmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addcdiv, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addcmul, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addmv, test/test_overrides.py::TestTorchFunctionOverride::test_torch_addr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_adjoint, test/test_overrides.py::TestTorchFunctionOverride::test_torch_affine_grid_generator, test/test_overrides.py::TestTorchFunctionOverride::test_torch_alias_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_all, test/test_overrides.py::TestTorchFunctionOverride::test_torch_allclose, test/test_overrides.py::TestTorchFunctionOverride::test_torch_alpha_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_amax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_amin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_aminmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_angle, test/test_overrides.py::TestTorchFunctionOverride::test_torch_any, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arccos, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arccosh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arcsin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arcsinh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arctan, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arctan2, test/test_overrides.py::TestTorchFunctionOverride::test_torch_arctanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_argmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_argmin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_argsort, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_argwhere, test/test_overrides.py::TestTorchFunctionOverride::test_torch_as_strided_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_as_strided_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_asin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_asinh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_atan, test/test_overrides.py::TestTorchFunctionOverride::test_torch_atan2, test/test_overrides.py::TestTorchFunctionOverride::test_torch_atanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_avg_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_baddbmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_backward_elemt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_backward_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_elemt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_gather_stats, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_gather_stats_with_counts, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_stats, test/test_overrides.py::TestTorchFunctionOverride::test_torch_batch_norm_update_stats, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bernoulli, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bilinear, test/test_overrides.py::TestTorchFunctionOverride::test_torch_binary_cross_entropy_with_logits, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bincount, test/test_overrides.py::TestTorchFunctionOverride::test_torch_binomial, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_and, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_left_shift, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_not, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_or, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_right_shift, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bitwise_xor, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_broadcast_to, test/test_overrides.py::TestTorchFunctionOverride::test_torch_bucketize, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cat, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ccol_indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ceil, test/test_overrides.py::TestTorchFunctionOverride::test_torch_celu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_channel_shuffle, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cholesky, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cholesky_inverse, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cholesky_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch_choose_qparams_optimized, test/test_overrides.py::TestTorchFunctionOverride::test_torch_chunk, test/test_overrides.py::TestTorchFunctionOverride::test_torch_clamp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_clamp_max, test/test_overrides.py::TestTorchFunctionOverride::test_torch_clamp_min, test/test_overrides.py::TestTorchFunctionOverride::test_torch_clip, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_clone, test/test_overrides.py::TestTorchFunctionOverride::test_torch_col_indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_column_stack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_combinations, test/test_overrides.py::TestTorchFunctionOverride::test_torch_complex, test/test_overrides.py::TestTorchFunctionOverride::test_torch_concat, test/test_overrides.py::TestTorchFunctionOverride::test_torch_concatenate, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conj, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conj_physical, test/test_overrides.py::TestTorchFunctionOverride::test_torch_constant_pad_nd, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv_tbc, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv_transpose1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv_transpose2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_conv_transpose3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_convolution, test/test_overrides.py::TestTorchFunctionOverride::test_torch_copysign, test/test_overrides.py::TestTorchFunctionOverride::test_torch_corrcoef, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cos, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cosh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cosine_embedding_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cosine_similarity, test/test_overrides.py::TestTorchFunctionOverride::test_torch_count_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cov, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cross, test/test_overrides.py::TestTorchFunctionOverride::test_torch_crow_indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ctc_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cummax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cummin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cumprod, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cumsum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_cumulative_trapezoid, test/test_overrides.py::TestTorchFunctionOverride::test_torch_deg2rad, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dequantize, test/test_overrides.py::TestTorchFunctionOverride::test_torch_det, test/test_overrides.py::TestTorchFunctionOverride::test_torch_detach, test/test_overrides.py::TestTorchFunctionOverride::test_torch_detach_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diag, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diag_embed, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diagflat, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diagonal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diagonal_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diagonal_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_diff, test/test_overrides.py::TestTorchFunctionOverride::test_torch_digamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dist, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_div, test/test_overrides.py::TestTorchFunctionOverride::test_torch_divide, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dot, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dsmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dsplit, test/test_overrides.py::TestTorchFunctionOverride::test_torch_dstack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_embedding, test/test_overrides.py::TestTorchFunctionOverride::test_torch_embedding_bag, test/test_overrides.py::TestTorchFunctionOverride::test_torch_empty_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_eq, test/test_overrides.py::TestTorchFunctionOverride::test_torch_equal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_erf, test/test_overrides.py::TestTorchFunctionOverride::test_torch_erfc, test/test_overrides.py::TestTorchFunctionOverride::test_torch_erfinv, test/test_overrides.py::TestTorchFunctionOverride::test_torch_exp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_exp2, test/test_overrides.py::TestTorchFunctionOverride::test_torch_expand_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_expm1, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fake_quantize_per_channel_affine, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fake_quantize_per_tensor_affine, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_linear_fp16_weight, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_linear_fp16_weight_fp32_activation, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_linear_int8_weight, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_linear_int8_weight_fp32_activation, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_linear_quantize_weight, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_pack_gemm_matrix_fp16, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fbgemm_pack_quantized_matrix, test/test_overrides.py::TestTorchFunctionOverride::test_torch_feature_alpha_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_feature_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fix, test/test_overrides.py::TestTorchFunctionOverride::test_torch_flatten, test/test_overrides.py::TestTorchFunctionOverride::test_torch_flip, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fliplr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_flipud, test/test_overrides.py::TestTorchFunctionOverride::test_torch_float_power, test/test_overrides.py::TestTorchFunctionOverride::test_torch_floor, test/test_overrides.py::TestTorchFunctionOverride::test_torch_floor_divide, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fmin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fmod, test/test_overrides.py::TestTorchFunctionOverride::test_torch_frac, test/test_overrides.py::TestTorchFunctionOverride::test_torch_frexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_frobenius_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_full_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_atleast_1d, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_atleast_2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_atleast_3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_block_diag, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_broadcast_tensors, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_cartesian_prod, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_cdist, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_chain_matmul, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_einsum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_lu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_meshgrid, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_split, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_stft, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_tensordot, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_unique, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_unique_consecutive, test/test_overrides.py::TestTorchFunctionOverride::test_torch_functional_unravel_index, test/test_overrides.py::TestTorchFunctionOverride::test_torch_fused_moving_avg_obs_fake_quant, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gather, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gcd, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ge, test/test_overrides.py::TestTorchFunctionOverride::test_torch_geqrf, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ger, test/test_overrides.py::TestTorchFunctionOverride::test_torch_get_device, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gradient, test/test_overrides.py::TestTorchFunctionOverride::test_torch_greater, test/test_overrides.py::TestTorchFunctionOverride::test_torch_greater_equal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_grid_sampler, test/test_overrides.py::TestTorchFunctionOverride::test_torch_grid_sampler_2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_grid_sampler_3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_group_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gru, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gru_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_gt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hardshrink, test/test_overrides.py::TestTorchFunctionOverride::test_torch_heaviside, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hinge_embedding_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_histc, test/test_overrides.py::TestTorchFunctionOverride::test_torch_histogram, test/test_overrides.py::TestTorchFunctionOverride::test_torch_histogramdd, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hsmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hsplit, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hstack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_hypot, test/test_overrides.py::TestTorchFunctionOverride::test_torch_i0, test/test_overrides.py::TestTorchFunctionOverride::test_torch_igamma, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_igammac, test/test_overrides.py::TestTorchFunctionOverride::test_torch_imag, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_add, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_fill, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_put, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_torch_index_select, test/test_overrides.py::TestTorchFunctionOverride::test_torch_indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_inner, test/test_overrides.py::TestTorchFunctionOverride::test_torch_instance_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_int_repr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_inverse, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_complex, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_conj, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_distributed, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_floating_point, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_inference, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_neg, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_same_size, test/test_overrides.py::TestTorchFunctionOverride::test_torch_is_signed, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isclose, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isfinite, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isinf, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isnan, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isneginf, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isposinf, test/test_overrides.py::TestTorchFunctionOverride::test_torch_isreal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_istft, test/test_overrides.py::TestTorchFunctionOverride::test_torch_kl_div, test/test_overrides.py::TestTorchFunctionOverride::test_torch_kron, test/test_overrides.py::TestTorchFunctionOverride::test_torch_kthvalue, test/test_overrides.py::TestTorchFunctionOverride::test_torch_layer_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lcm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ldexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_le, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lerp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_less, test/test_overrides.py::TestTorchFunctionOverride::test_torch_less_equal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lgamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch_log, test/test_overrides.py::TestTorchFunctionOverride::test_torch_log10, test/test_overrides.py::TestTorchFunctionOverride::test_torch_log1p, test/test_overrides.py::TestTorchFunctionOverride::test_torch_log2, test/test_overrides.py::TestTorchFunctionOverride::test_torch_log_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logaddexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logaddexp2, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_logcumsumexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logdet, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logical_and, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logical_not, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logical_or, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logical_xor, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logit, test/test_overrides.py::TestTorchFunctionOverride::test_torch_logsumexp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lstm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lstm_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lu_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch_lu_unpack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_margin_ranking_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_masked_fill, test/test_overrides.py::TestTorchFunctionOverride::test_torch_masked_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_masked_select, test/test_overrides.py::TestTorchFunctionOverride::test_torch_matmul, test/test_overrides.py::TestTorchFunctionOverride::test_torch_matrix_exp, test/test_overrides.py::TestTorchFunctionOverride::test_torch_matrix_power, test/test_overrides.py::TestTorchFunctionOverride::test_torch_max, test/test_overrides.py::TestTorchFunctionOverride::test_torch_max_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_max_pool1d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_max_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_max_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_maximum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_mean, test/test_overrides.py::TestTorchFunctionOverride::test_torch_median, test/test_overrides.py::TestTorchFunctionOverride::test_torch_min, test/test_overrides.py::TestTorchFunctionOverride::test_torch_minimum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_batch_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_convolution, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_convolution_add_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_convolution_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_convolution_transpose, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_depthwise_convolution, test/test_overrides.py::TestTorchFunctionOverride::test_torch_miopen_rnn, test/test_overrides.py::TestTorchFunctionOverride::test_torch_mode, test/test_overrides.py::TestTorchFunctionOverride::test_torch_moveaxis, test/test_overrides.py::TestTorchFunctionOverride::test_torch_movedim, test/test_overrides.py::TestTorchFunctionOverride::test_torch_msort, test/test_overrides.py::TestTorchFunctionOverride::test_torch_mul, test/test_overrides.py::TestTorchFunctionOverride::test_torch_multinomial, test/test_overrides.py::TestTorchFunctionOverride::test_torch_multiply, test/test_overrides.py::TestTorchFunctionOverride::test_torch_mv, test/test_overrides.py::TestTorchFunctionOverride::test_torch_mvlgamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nan_to_num, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_nanmean, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nanmedian, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nanquantile, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nansum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_narrow, test/test_overrides.py::TestTorchFunctionOverride::test_torch_narrow_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_batch_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_channel_shuffle, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_group_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_layer_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_native_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ne, test/test_overrides.py::TestTorchFunctionOverride::test_torch_neg, test/test_overrides.py::TestTorchFunctionOverride::test_torch_negative, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nextafter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional__threshold, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_avg_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_avg_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool1d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool2d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_adaptive_max_pool3d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_affine_grid, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_alpha_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_batch_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_binary_cross_entropy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_binary_cross_entropy_with_logits, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_celu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_cosine_embedding_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_cross_entropy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_ctc_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_dropout1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_dropout2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_dropout3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_elu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_embedding, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_embedding_bag, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_feature_alpha_dropout, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_fold, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_fractional_max_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_fractional_max_pool2d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_fractional_max_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_fractional_max_pool3d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_gaussian_nll_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_glu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_grid_sample, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_group_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_gumbel_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_hardtanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_hinge_embedding_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_huber_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_instance_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_interpolate, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_kl_div, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_l1_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_layer_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_leaky_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_local_response_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_log_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_lp_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_lp_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_lp_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_margin_ranking_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool1d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool2d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_pool3d_with_indices, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_unpool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_unpool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_max_unpool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_mish, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_mse_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_multi_head_attention_forward, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_multi_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_multilabel_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_multilabel_soft_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_nll_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_normalize, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_pad, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_poisson_nll_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_relu6, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_rms_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_rrelu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_selu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_silu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_smooth_l1_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_soft_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_softmin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_softsign, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_tanhshrink, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_triplet_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_triplet_margin_with_distance_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_functional_unfold, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_init_constant_, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_init_kaiming_uniform_, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_init_normal_, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nn_init_uniform_, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nonzero, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nonzero_static, test/test_overrides.py::TestTorchFunctionOverride::test_torch_norm_except_dim, test/test_overrides.py::TestTorchFunctionOverride::test_torch_not_equal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_nuclear_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_numel, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ones_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_orgqr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ormqr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_outer, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pairwise_distance, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pdist, test/test_overrides.py::TestTorchFunctionOverride::test_torch_permute, test/test_overrides.py::TestTorchFunctionOverride::test_torch_permute_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pinverse, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pixel_shuffle, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pixel_unshuffle, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_poisson, test/test_overrides.py::TestTorchFunctionOverride::test_torch_poisson_nll_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_polar, test/test_overrides.py::TestTorchFunctionOverride::test_torch_polygamma, test/test_overrides.py::TestTorchFunctionOverride::test_torch_positive, test/test_overrides.py::TestTorchFunctionOverride::test_torch_pow, test/test_overrides.py::TestTorchFunctionOverride::test_torch_prelu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_prod, test/test_overrides.py::TestTorchFunctionOverride::test_torch_put, test/test_overrides.py::TestTorchFunctionOverride::test_torch_q_per_channel_axis, test/test_overrides.py::TestTorchFunctionOverride::test_torch_q_per_channel_scales, test/test_overrides.py::TestTorchFunctionOverride::test_torch_q_per_channel_zero_points, test/test_overrides.py::TestTorchFunctionOverride::test_torch_q_scale, test/test_overrides.py::TestTorchFunctionOverride::test_torch_q_zero_point, test/test_overrides.py::TestTorchFunctionOverride::test_torch_qr, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantile, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantize_per_channel, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantize_per_tensor, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantize_per_tensor_dynamic, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_batch_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_gru_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_lstm_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_max_pool1d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_max_pool2d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_max_pool3d, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_rnn_relu_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_quantized_rnn_tanh_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rad2deg, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rand_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_randint_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_randn_like, test/test_overrides.py::TestTorchFunctionOverride::test_torch_ravel, test/test_overrides.py::TestTorchFunctionOverride::test_torch_real, test/test_overrides.py::TestTorchFunctionOverride::test_torch_reciprocal, test/test_overrides.py::TestTorchFunctionOverride::test_torch_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_remainder, test/test_overrides.py::TestTorchFunctionOverride::test_torch_renorm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_repeat_interleave, test/test_overrides.py::TestTorchFunctionOverride::test_torch_reshape, test/test_overrides.py::TestTorchFunctionOverride::test_torch_resolve_conj, test/test_overrides.py::TestTorchFunctionOverride::test_torch_resolve_neg, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rms_norm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rnn_relu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rnn_relu_cell, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rnn_tanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rnn_tanh_cell, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_roll, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rot90, test/test_overrides.py::TestTorchFunctionOverride::test_torch_round, test/test_overrides.py::TestTorchFunctionOverride::test_torch_row_indices_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_row_stack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rrelu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rsqrt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_rsub, test/test_overrides.py::TestTorchFunctionOverride::test_torch_saddmm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_scatter_add, test/test_overrides.py::TestTorchFunctionOverride::test_torch_scatter_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_torch_searchsorted, test/test_overrides.py::TestTorchFunctionOverride::test_torch_segment_reduce, test/test_overrides.py::TestTorchFunctionOverride::test_torch_select, test/test_overrides.py::TestTorchFunctionOverride::test_torch_select_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_select_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_selu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sgn, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sigmoid, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sign, test/test_overrides.py::TestTorchFunctionOverride::test_torch_signbit, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sin, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sinc, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sinh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_slice_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_slice_inverse, test/test_overrides.py::TestTorchFunctionOverride::test_torch_slice_scatter, test/test_overrides.py::TestTorchFunctionOverride::test_torch_slogdet, test/test_overrides.py::TestTorchFunctionOverride::test_torch_smm, test/test_overrides.py::TestTorchFunctionOverride::test_torch_softmax, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sort, test/test_overrides.py::TestTorchFunctionOverride::test_torch_split_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_split_with_sizes, test/test_overrides.py::TestTorchFunctionOverride::test_torch_split_with_sizes_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sqrt, test/test_overrides.py::TestTorchFunctionOverride::test_torch_square, test/test_overrides.py::TestTorchFunctionOverride::test_torch_squeeze, test/test_overrides.py::TestTorchFunctionOverride::test_torch_squeeze_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_stack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_std, test/test_overrides.py::TestTorchFunctionOverride::test_torch_std_mean, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sub, test/test_overrides.py::TestTorchFunctionOverride::test_torch_subtract, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_svd, test/test_overrides.py::TestTorchFunctionOverride::test_torch_swapaxes, test/test_overrides.py::TestTorchFunctionOverride::test_torch_swapdims, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_float, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_int, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_ite, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_max, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_min, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_not, test/test_overrides.py::TestTorchFunctionOverride::test_torch_sym_sum, test/test_overrides.py::TestTorchFunctionOverride::test_torch_t, test/test_overrides.py::TestTorchFunctionOverride::test_torch_t_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_take, test/test_overrides.py::TestTorchFunctionOverride::test_torch_take_along_dim, test/test_overrides.py::TestTorchFunctionOverride::test_torch_tan, test/test_overrides.py::TestTorchFunctionOverride::test_torch_tanh, test/test_overrides.py::TestTorchFunctionOverride::test_torch_tensor_split, test/test_overrides.py::TestTorchFunctionOverride::test_torch_threshold, test/test_overrides.py::TestTorchFunctionOverride::test_torch_tile, test/test_overrides.py::TestTorchFunctionOverride::test_torch_topk, test/test_overrides.py::TestTorchFunctionOverride::test_torch_trace, test/test_overrides.py::TestTorchFunctionOverride::test_torch_transpose, test/test_overrides.py::TestTorchFunctionOverride::test_torch_transpose_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_trapezoid, test/test_overrides.py::TestTorchFunctionOverride::test_torch_trapz, test/test_overrides.py::TestTorchFunctionOverride::test_torch_triangular_solve, test/test_overrides.py::TestTorchFunctionOverride::test_torch_tril, test/test_overrides.py::TestTorchFunctionOverride::test_torch_triplet_margin_loss, test/test_overrides.py::TestTorchFunctionOverride::test_torch_triu, test/test_overrides.py::TestTorchFunctionOverride::test_torch_true_divide, test/test_overrides.py::TestTorchFunctionOverride::test_torch_trunc, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unbind, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unbind_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unflatten, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unfold_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unsafe_chunk, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unsafe_split, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unsafe_split_with_sizes, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unsqueeze, test/test_overrides.py::TestTorchFunctionOverride::test_torch_unsqueeze_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_values_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_var, test/test_overrides.py::TestTorchFunctionOverride::test_torch_var_mean, test/test_overrides.py::TestTorchFunctionOverride::test_torch_vdot, test/test_overrides.py::TestTorchFunctionOverride::test_torch_view_as_complex, test/test_overrides.py::TestTorchFunctionOverride::test_torch_view_as_complex_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_view_as_real, test/test_overrides.py::TestTorchFunctionOverride::test_torch_view_as_real_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_view_copy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_vsplit, test/test_overrides.py::TestTorchFunctionOverride::test_torch_vstack, test/test_overrides.py::TestTorchFunctionOverride::test_torch_where, 
test/test_overrides.py::TestTorchFunctionOverride::test_torch_xlogy, test/test_overrides.py::TestTorchFunctionOverride::test_torch_zeros_like, test/test_overrides.py::TestTorchFunctionOverride::test_user_implementation_raises, test/test_overrides.py::TestEinsumOverride::test_wrapper, test/test_overrides.py::TestGradCheckOverride::test_gradcheck, test/test_overrides.py::TestNamedTuple::test_max, test/test_overrides.py::TestGradNewOnesOverride::test_newones, test/test_overrides.py::TestPickle::test_pickle, test/test_overrides.py::TestBroadcastAllOverride::test_broadcast_all, test/test_overrides.py::TestWrapTorchFunction::test_wrap_torch_function, test/test_overrides.py::TestIndexing::test_getitem, test/test_overrides.py::TestIndexing::test_getitem_subclass, test/test_overrides.py::TestIndexing::test_setitem, test/test_overrides.py::TestIndexing::test_setitem_subclass, test/test_overrides.py::TestIndexing::test_setitem_val, test/test_overrides.py::TestIterator::test_iterator, test/test_overrides.py::TestRNN::test_rnn, test/test_overrides.py::TestDisabledTorchFunction::test_parameter_does_not_prevent_dispatch, test/test_overrides.py::TestResolveName::test_resolve_name, test/test_overrides.py::TestTorchFunctionWarning::test_warn_on_invalid_torch_function_standalone_class, test/test_overrides.py::TestTorchFunctionWarning::test_warn_on_invalid_torch_function_tensor_subclass, test/test_overrides.py::TestDisabledUserWarnings::test_no_implicit_user_warning_for_deprecated_functions, test/test_overrides.py::TestTorchFunctionMode::test_all_same_mode, test/test_overrides.py::TestTorchFunctionMode::test_basic, test/test_overrides.py::TestTorchFunctionMode::test_custom_device_type, test/test_overrides.py::TestTorchFunctionMode::test_device_context_semantics, test/test_overrides.py::TestTorchFunctionMode::test_disable_enable_subclass, test/test_overrides.py::TestTorchFunctionMode::test_disable_enable_torch_function_ctx, test/test_overrides.py::TestTorchFunctionMode::test_disable_subclass_mode, test/test_overrides.py::TestTorchFunctionMode::test_disable_subclass_not_mode, test/test_overrides.py::TestTorchFunctionMode::test_distributions_bernoulli, test/test_overrides.py::TestTorchFunctionMode::test_error_using_class_method_on_mode, test/test_overrides.py::TestTorchFunctionMode::test_factory_override, test/test_overrides.py::TestTorchFunctionMode::test_get_cur_mode, test/test_overrides.py::TestTorchFunctionMode::test_get_mode_stack, test/test_overrides.py::TestTorchFunctionMode::test_getitem_call, test/test_overrides.py::TestTorchFunctionMode::test_mode_notimplemented_loop, test/test_overrides.py::TestTorchFunctionMode::test_modes_handle_first, test/test_overrides.py::TestTorchFunctionMode::test_modes_return_notimplemented, test/test_overrides.py::TestTorchFunctionMode::test_nested_modes_with_python_has_torch_function, test/test_overrides.py::TestTorchFunctionMode::test_nested_same_mode, test/test_overrides.py::TestTorchFunctionMode::test_nn_parse_to, test/test_overrides.py::TestTorchFunctionMode::test_reentrant_mode_idiom, test/test_overrides.py::TestTorchFunctionMode::test_restacking_with_ancestor, test/test_overrides.py::TestTorchFunctionMode::test_subclass_hash, test/test_overrides.py::TestTorchFunctionMode::test_torch_function_all_disabled_api, test/test_overrides.py::TestTorchFunctionMode::test_with_mode, test/test_overrides.py::TestTorchFunctionMode::test_with_mode_created_separately, test/test_overrides.py::TestTorchFunctionMode::test_with_nested_modes 2025-03-17T18:08:04.0220438Z 
2025-03-17T18:08:04.0220692Z Running test_cuda_nvml_based_avail 1/1 ... [2025-03-17 18:08:03.923796]
2025-03-17T18:08:04.0221181Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T18:08:04.0222610Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cuda_nvml_based_avail.py', '--shard-id=1', '--num-shards=1', '-v', '--subprocess', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:08:03.924112]
2025-03-17T18:08:07.1094265Z
2025-03-17T18:08:07.1095239Z test_cuda_nvml_based_avail 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cuda_nvml_based_avail_1.1_8463a41bacee3d33_.log
2025-03-17T18:08:07.1096100Z Running 0 items in this shard:
2025-03-17T18:08:07.1096316Z
2025-03-17T18:08:07.1098946Z Running test_multiprocessing_spawn 1/1 ... [2025-03-17 18:08:07.109626]
2025-03-17T18:08:07.1099469Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T18:08:07.1102099Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_multiprocessing_spawn.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:08:07.109974]
2025-03-17T18:10:28.8597598Z
2025-03-17T18:10:28.8598643Z test_multiprocessing_spawn 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_multiprocessing_spawn_1.1_200437085c306514_.log
2025-03-17T18:10:28.8610764Z Running 31 items in this shard: test/test_multiprocessing_spawn.py::SpawnTest::test_exception_all, test/test_multiprocessing_spawn.py::SpawnTest::test_exception_raises, test/test_multiprocessing_spawn.py::SpawnTest::test_exception_single, test/test_multiprocessing_spawn.py::SpawnTest::test_first_argument_index, test/test_multiprocessing_spawn.py::SpawnTest::test_signal_raises, test/test_multiprocessing_spawn.py::SpawnTest::test_success, test/test_multiprocessing_spawn.py::SpawnTest::test_success_first_then_exception, test/test_multiprocessing_spawn.py::SpawnTest::test_success_non_blocking, test/test_multiprocessing_spawn.py::SpawnTest::test_terminate_exit_grace_period0, test/test_multiprocessing_spawn.py::SpawnTest::test_terminate_exit_grace_period_5, test/test_multiprocessing_spawn.py::SpawnTest::test_terminate_signal, test/test_multiprocessing_spawn.py::ForkTest::test_exception_all, test/test_multiprocessing_spawn.py::ForkTest::test_exception_single, test/test_multiprocessing_spawn.py::ForkTest::test_first_argument_index, test/test_multiprocessing_spawn.py::ForkTest::test_success, test/test_multiprocessing_spawn.py::ForkTest::test_success_first_then_exception, test/test_multiprocessing_spawn.py::ForkTest::test_success_non_blocking, test/test_multiprocessing_spawn.py::ForkTest::test_terminate_exit_grace_period0, test/test_multiprocessing_spawn.py::ForkTest::test_terminate_exit_grace_period_5, test/test_multiprocessing_spawn.py::ForkTest::test_terminate_signal, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_exception_all, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_exception_single, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_first_argument_index, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_success, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_success_first_then_exception,
test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_success_non_blocking, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_terminate_exit_grace_period0, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_terminate_exit_grace_period_5, test/test_multiprocessing_spawn.py::ParallelForkServerShouldWorkTest::test_terminate_signal, test/test_multiprocessing_spawn.py::ParallelForkServerPerfTest::test_forkserver_perf, test/test_multiprocessing_spawn.py::ErrorTest::test_errors_pickleable
2025-03-17T18:10:28.8621887Z
2025-03-17T18:10:28.8622085Z Running test_reductions 1/4 ... [2025-03-17 18:10:28.859926]
2025-03-17T18:10:28.8622516Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T18:10:28.8623696Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_reductions.py', '--shard-id=1', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:10:28.860229]
2025-03-17T18:28:58.4224848Z
2025-03-17T18:28:58.4225787Z test_reductions 1/4 was successful, full logs can be found in artifacts with path test/test-reports/test_reductions_1.4_ae7deb5d9cea05b3_.log
2025-03-17T18:28:58.4663842Z Running 1143 items in this shard: test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_all_any_with_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_aminmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_aminmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_argminmax_axis_with_dim_one_cpu, test/test_reductions.py::TestReductionsCPU::test_argminmax_large_axis_cpu, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_bincount_cpu, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_cumsum_integer_upcast_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_logsumexp_cpu,
test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_mean_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsupported_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_amax_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_all_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_sum_cpu, 
test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cuda_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cuda_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cuda_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_histogram_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_histogram_error_handling_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_histogramdd_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_invalid_0dim_aminmax_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_logsumexp_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_max_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_max_elementwise_cpu, test/test_reductions.py::TestReductionsCPU::test_max_with_inf_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_mean_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_median_real_values_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_min_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_min_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_min_with_inf_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_mode_boolean_cpu, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_prod_gpu_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reduction_split_cpu, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_input_corner_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_output_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_var_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_prod_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_std_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_std_correction_vs_numpy_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_std_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_std_vs_numpy_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_std_vs_numpy_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_sum_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_sum_cpu_device_mismatch_cpu, test/test_reductions.py::TestReductionsCPU::test_sum_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_sum_dim_reduction_uint8_overflow_cpu, test/test_reductions.py::TestReductionsCPU::test_sum_out_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_tensor_compare_ops_argmax_argmix_kthvalue_dim_empty_cpu, test/test_reductions.py::TestReductionsCPU::test_var_correction_vs_numpy_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_var_large_input_cpu, test/test_reductions.py::TestReductionsCPU::test_var_mean_correction_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_var_mean_correction_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_var_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_warn_invalid_degrees_of_freedom_cpu_float64 2025-03-17T18:28:58.5092224Z 2025-03-17T18:28:58.5092430Z Running test_reductions 2/4 ... [2025-03-17 18:28:58.424149] 2025-03-17T18:28:58.5092868Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:28:58.5093954Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_reductions.py', '--shard-id=2', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:28:58.424499] 2025-03-17T18:30:41.7170874Z 2025-03-17T18:30:41.7171880Z test_reductions 2/4 was successful, full logs can be found in artifacts with path test/test-reports/test_reductions_2.4_9c93a165c9354117_.log 2025-03-17T18:30:41.7613206Z Running 1142 items in this shard: test/test_reductions.py::TestReductionsCPU::test_all_any_cpu, test/test_reductions.py::TestReductionsCPU::test_all_any_empty_cpu, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_count_nonzero_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_amin_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsupported_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_sum_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_linalg_vector_norm_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_all_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_any_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_any_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_errors_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_max_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_max_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_max_with_inf_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_mean_out_is_alias_of_return_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_mean_out_is_alias_of_return_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_mean_out_is_alias_of_return_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_median_corner_cases_cpu, test/test_reductions.py::TestReductionsCPU::test_median_nan_values_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_min_with_inf_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_min_with_inf_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_minmax_illegal_dtype_cpu, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_mode_wrong_dtype_cpu, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amax_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_nansum_complex_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_int8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_norm_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_int8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_complex128, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_quantile_error_cpu, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_input_corner_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_input_corner_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_input_corner_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_output_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_output_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reduction_vectorize_along_output_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_all_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nanmean_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_complex128, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_std_correction_vs_numpy_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_std_correction_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_std_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_std_mean_all_dims_cpu, test/test_reductions.py::TestReductionsCPU::test_std_mean_correction_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_sum_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_tensor_reduce_ops_empty_cpu, test/test_reductions.py::TestReductionsCPU::test_var_correction_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_var_mean_all_dims_cpu, 
test/test_reductions.py::TestReductionsCPU::test_var_mean_correction_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_var_mean_correction_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_var_stability_cpu, test/test_reductions.py::TestReductionsCPU::test_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_var_vs_numpy_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_var_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_warn_invalid_degrees_of_freedom_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_warn_invalid_degrees_of_freedom_cpu_complex64
2025-03-17T18:30:41.8038743Z
2025-03-17T18:30:41.8038969Z Running test_reductions 3/4 ... [2025-03-17 18:30:41.719125]
2025-03-17T18:30:41.8039404Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T18:30:41.8040474Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_reductions.py', '--shard-id=3', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:30:41.719442]
2025-03-17T18:32:23.1614778Z
2025-03-17T18:32:23.1615686Z test_reductions 3/4 was successful, full logs can be found in artifacts with path test/test-reports/test_reductions_3.4_5dab9477c3721fdd_.log
2025-03-17T18:32:23.2072999Z Running 1195 items in this shard: test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_all_issue117215_cpu, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_amin_amax_some_dims_cpu, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_aminmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_norm_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_amin_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsupported_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsupported_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsupported_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_count_nonzero_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_lastdim_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_lastdim_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_norm_cpu, 
test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_histc_cpu, test/test_reductions.py::TestReductionsCPU::test_histc_lowp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_histc_lowp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_histc_min_max_corner_cases_cuda_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_masked_norm_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_logcumsumexp_complex_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_logsumexp_integral_promotion_cpu, test/test_reductions.py::TestReductionsCPU::test_max_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_max_with_inf_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_mean_out_is_alias_of_return_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_median_nan_values_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_median_real_values_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_min_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_min_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_min_elementwise_cpu, test/test_reductions.py::TestReductionsCPU::test_min_mixed_devices_cpu, test/test_reductions.py::TestReductionsCPU::test_min_with_inf_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_mode_wrong_device_cpu, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_complex128, 
test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_nanmean_integral_types_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_nansum_complex_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amin_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_complex128, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_prod_gpu_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_prod_integer_upcast_cpu, test/test_reductions.py::TestReductionsCPU::test_quantile_backward_cpu, test/test_reductions.py::TestReductionsCPU::test_quantile_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reduce_dtype_cpu, test/test_reductions.py::TestReductionsCPU::test_reduction_empty_any_all_cpu, test/test_reductions.py::TestReductionsCPU::test_reductions_large_half_tensors_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_amin_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_std_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_bool, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_std_correction_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_std_mean_correction_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_std_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_sum_integer_upcast_cpu, test/test_reductions.py::TestReductionsCPU::test_sum_noncontig_lowp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_sum_vs_numpy_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_var_correction_vs_numpy_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_var_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_var_vs_numpy_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_var_vs_numpy_cpu_float32
2025-03-17T18:32:23.2519389Z
2025-03-17T18:32:23.2519595Z Running test_reductions 4/4 ... [2025-03-17 18:32:23.163306]
2025-03-17T18:32:23.2520031Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set
2025-03-17T18:32:23.2521122Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_reductions.py', '--shard-id=4', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:32:23.163622]
2025-03-17T18:33:58.8972663Z
2025-03-17T18:33:58.8974098Z test_reductions 4/4 was successful, full logs can be found in artifacts with path test/test-reports/test_reductions_4.4_ba6861575f79aea2_.log
2025-03-17T18:33:58.9393253Z Running 1085 items in this shard: test/test_reductions.py::TestReductionsCPU::test_accreal_type_cpu, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_all_any_vs_numpy_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_aminmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_argminmax_multiple_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_bucketization_cpu, test/test_reductions.py::TestReductionsCPU::test_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_cumprod_integer_upcast_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_arg_reduction_scalar_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_default_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_std_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_empty__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_nanmean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_keepdim_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_empty_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_duplicate_nansum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_keepdim_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_amax_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_keepdim_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_multi_unsorted_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_masked_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_ndim_limit_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none__refs_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_count_nonzero_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_keepdim_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_none_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_amin_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds__refs_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_all_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_argmin_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_offbounds_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_max_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mean_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_median_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_min_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_mode_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_nanmedian_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_norm_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_std_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_dim_reduction_less_than_64_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single__refs_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_argmax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_count_nonzero_cpu, 
test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_keepdim_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_linalg_vector_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_masked_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_std_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_dim_single_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_all_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice__refs_any_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_logsumexp_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_masked_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_std_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_sum_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_empty_slice_var_unbiased_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_mean_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice__refs_std_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_argmin_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_amax_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_norm_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_masked_var_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_prod_cpu, test/test_reductions.py::TestReductionsCPU::test_empty_tensor_nonempty_slice_var_cpu, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_identity__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_identity_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_identity_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_identity_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_identity_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_identity_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_identity_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_invalid_0dim_aminmax_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_logcumsumexp_complex_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_max_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_max_mixed_devices_cpu, test/test_reductions.py::TestReductionsCPU::test_max_with_inf_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_mean_int_with_optdtype_cpu, test/test_reductions.py::TestReductionsCPU::test_median_real_values_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_median_real_values_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_min_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_min_max_nan_cpu, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_mode_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_mode_large_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_omit_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_nan_policy_propagate_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_nansum_out_dtype_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_nansum_vs_numpy_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amax_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmax_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_std_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_all_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_logsumexp_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_std_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_expanded_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost__refs_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_all_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_logsumexp_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_innermost_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_all_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_linalg_vector_norm_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_float32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost__refs_var_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_logsumexp_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_nansum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_outermost_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amax_cpu_int64, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_logsumexp_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nanmean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_nansum_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_bfloat16, 
test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_std_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_noncontiguous_transposed_var_unbiased_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_numpy_named_args_cpu, test/test_reductions.py::TestReductionsCPU::test_prod_bool_cpu, test/test_reductions.py::TestReductionsCPU::test_prod_lowp_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_prod_lowp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_quantile_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reductions_large_half_tensors_cpu_complex32, test/test_reductions.py::TestReductionsCPU::test_reductions_large_half_tensors_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_all_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values__refs_var_cpu_complex64, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_all_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_amin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_argmin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_std_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_masked_var_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nanmean_cpu_float16, 
test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nanmean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_duplicate_values_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_amax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_extremal_values_masked_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_1D_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_2D__refs_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_amax_cpu_float64, 
test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing__refs_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_count_nonzero_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_masked_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_large_input_64bit_indexing_nansum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_mean_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input__refs_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_all_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_any_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmax_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_std_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nanmean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_nansum_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_std_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_scalar_input_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_all_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_count_nonzero_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_prod_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_all_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_amin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_any_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_argmin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_std_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_sum_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_masked_var_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nanmean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_nansum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_std_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_ref_small_input_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_int16, 
test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_std_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_sum_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_reference_masked_masked_var_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_repeated_dim_cpu, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_all_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_amin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_count_nonzero_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_linalg_vector_norm_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_mean_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_sum_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype__refs_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_all_cpu_uint8, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_bool, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_amin_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_any_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmax_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_argmin_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_count_nonzero_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_linalg_vector_norm_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amax_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_amin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmax_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_argmin_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_logsumexp_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_norm_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_int8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_prod_cpu_uint8, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_std_cpu_int64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_masked_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_mean_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_int16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_nansum_cpu_int32, 
test/test_reductions.py::TestReductionsCPU::test_result_dtype_prod_cpu_int32, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_bfloat16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_std_unbiased_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_sum_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_complex128, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_complex64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_result_dtype_var_unbiased_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_std_mean_correction_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_std_mean_correction_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_std_mean_some_dims_cpu, test/test_reductions.py::TestReductionsCPU::test_std_vs_numpy_cpu_float32, test/test_reductions.py::TestReductionsCPU::test_sum_noncontig_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_sum_noncontig_lowp_cpu_float16, test/test_reductions.py::TestReductionsCPU::test_sum_parallel_cpu, test/test_reductions.py::TestReductionsCPU::test_tensor_compare_ops_empty_cpu, test/test_reductions.py::TestReductionsCPU::test_var_correction_vs_numpy_cpu_float64, test/test_reductions.py::TestReductionsCPU::test_var_cpu, test/test_reductions.py::TestReductionsCPU::test_var_mean_some_dims_cpu, test/test_reductions.py::TestReductionsCPU::test_var_stability2_cpu, test/test_reductions.py::TestReductionsCPU::test_warn_invalid_degrees_of_freedom_cpu_float32 2025-03-17T18:33:58.9811252Z 2025-03-17T18:33:58.9811575Z Running distributions/test_distributions 1/2 ... [2025-03-17 18:33:58.899246] 2025-03-17T18:33:58.9812108Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:33:58.9813537Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'distributions/test_distributions.py', '--shard-id=1', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
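The "Executing [...]" entry above shows how the harness launches each shard of distributions/test_distributions as a script with pytest-style flags. Below is a minimal sketch (not part of the CI tooling) of reproducing that invocation locally; the interpreter path is the one from this runner, and running from PyTorch's test/ directory is an assumption inferred from the relative script path.

import subprocess

# Re-run shard 1 of 2 of distributions/test_distributions with the same flags
# the harness logged above. The conda interpreter path and the "test" working
# directory are assumptions taken from this particular runner environment.
cmd = [
    "/opt/conda/envs/py_3.13/bin/python", "-bb",
    "distributions/test_distributions.py",
    "--shard-id=1", "--num-shards=2",
    "-v", "-vv", "-rfEX", "-p", "no:xdist",
    "--use-pytest", "-x", "--reruns=2",
    "--import-slow-tests", "--import-disabled-tests",
]
subprocess.run(cmd, cwd="test", check=True)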
[2025-03-17 18:33:58.899605] 2025-03-17T18:39:09.2516393Z 2025-03-17T18:39:09.2517785Z distributions/test_distributions 1/2 was successful, full logs can be found in artifacts with path test/test-reports/distributions.test_distributions_1.2_edf0b4eda7cfebe2_.log 2025-03-17T18:39:09.2571807Z Running 130 items in this shard: test/distributions/test_distributions.py::TestDistributions::test_argmax_relaxed_categorical, test/distributions/test_distributions.py::TestDistributions::test_bernoulli, test/distributions/test_distributions.py::TestDistributions::test_bernoulli_3d, test/distributions/test_distributions.py::TestDistributions::test_bernoulli_enumerate_support, test/distributions/test_distributions.py::TestDistributions::test_beta_log_prob, test/distributions/test_distributions.py::TestDistributions::test_beta_underflow, test/distributions/test_distributions.py::TestDistributions::test_binomial, test/distributions/test_distributions.py::TestDistributions::test_binomial_half, test/distributions/test_distributions.py::TestDistributions::test_binomial_log_prob_vectorized_count, test/distributions/test_distributions.py::TestDistributions::test_binomial_vectorized_count, test/distributions/test_distributions.py::TestDistributions::test_categorical_1d, test/distributions/test_distributions.py::TestDistributions::test_categorical_2d, test/distributions/test_distributions.py::TestDistributions::test_categorical_enumerate_support, test/distributions/test_distributions.py::TestDistributions::test_cauchy, test/distributions/test_distributions.py::TestDistributions::test_cdf_icdf_inverse, test/distributions/test_distributions.py::TestDistributions::test_cdf_log_prob, test/distributions/test_distributions.py::TestDistributions::test_chi2_shape, test/distributions/test_distributions.py::TestDistributions::test_continuous_bernoulli, test/distributions/test_distributions.py::TestDistributions::test_continuous_bernoulli_3d, test/distributions/test_distributions.py::TestDistributions::test_distribution_expand, test/distributions/test_distributions.py::TestDistributions::test_enumerate_support_type, test/distributions/test_distributions.py::TestDistributions::test_exponential, test/distributions/test_distributions.py::TestDistributions::test_exponential_sample, test/distributions/test_distributions.py::TestDistributions::test_fishersnedecor, test/distributions/test_distributions.py::TestDistributions::test_gamma_sample, test/distributions/test_distributions.py::TestDistributions::test_gamma_shape, test/distributions/test_distributions.py::TestDistributions::test_geometric, test/distributions/test_distributions.py::TestDistributions::test_geometric_log_prob_and_entropy, test/distributions/test_distributions.py::TestDistributions::test_geometric_sample, test/distributions/test_distributions.py::TestDistributions::test_gumbel_sample, test/distributions/test_distributions.py::TestDistributions::test_halfcauchy, test/distributions/test_distributions.py::TestDistributions::test_halfnormal, test/distributions/test_distributions.py::TestDistributions::test_halfnormal_logprob, test/distributions/test_distributions.py::TestDistributions::test_has_examples, test/distributions/test_distributions.py::TestDistributions::test_independent_expand, test/distributions/test_distributions.py::TestDistributions::test_independent_shape, test/distributions/test_distributions.py::TestDistributions::test_invalid_parameter_broadcasting, test/distributions/test_distributions.py::TestDistributions::test_inversegamma, 
test/distributions/test_distributions.py::TestDistributions::test_inversegamma_sample, test/distributions/test_distributions.py::TestDistributions::test_kumaraswamy_mean_variance, test/distributions/test_distributions.py::TestDistributions::test_kumaraswamy_shape, test/distributions/test_distributions.py::TestDistributions::test_lkj_cholesky_log_prob, test/distributions/test_distributions.py::TestDistributions::test_logisticnormal_logprob, test/distributions/test_distributions.py::TestDistributions::test_logisticnormal_sample, test/distributions/test_distributions.py::TestDistributions::test_lognormal_logprob, test/distributions/test_distributions.py::TestDistributions::test_lognormal_sample, test/distributions/test_distributions.py::TestDistributions::test_lowrank_multivariate_normal_log_prob, test/distributions/test_distributions.py::TestDistributions::test_lowrank_multivariate_normal_moments, test/distributions/test_distributions.py::TestDistributions::test_lowrank_multivariate_normal_shape, test/distributions/test_distributions.py::TestDistributions::test_mixture_same_family_log_prob, test/distributions/test_distributions.py::TestDistributions::test_multinomial_1d_log_prob_and_entropy, test/distributions/test_distributions.py::TestDistributions::test_multinomial_2d, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_log_prob, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_moments, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_properties, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_shape, test/distributions/test_distributions.py::TestDistributions::test_negative_binomial, test/distributions/test_distributions.py::TestDistributions::test_normal, test/distributions/test_distributions.py::TestDistributions::test_normal_sample, test/distributions/test_distributions.py::TestDistributions::test_one_hot_categorical_2d, test/distributions/test_distributions.py::TestDistributions::test_pareto, test/distributions/test_distributions.py::TestDistributions::test_pareto_sample, test/distributions/test_distributions.py::TestDistributions::test_poisson_forward_ad, test/distributions/test_distributions.py::TestDistributions::test_poisson_log_prob, test/distributions/test_distributions.py::TestDistributions::test_repr, test/distributions/test_distributions.py::TestDistributions::test_rounded_relaxed_bernoulli, test/distributions/test_distributions.py::TestDistributions::test_studentT, test/distributions/test_distributions.py::TestDistributions::test_studentT_log_prob, test/distributions/test_distributions.py::TestDistributions::test_studentT_sample, test/distributions/test_distributions.py::TestDistributions::test_support_attributes, test/distributions/test_distributions.py::TestDistributions::test_vonmises_logprob, test/distributions/test_distributions.py::TestDistributions::test_vonmises_sample, test/distributions/test_distributions.py::TestDistributions::test_wishart_log_prob, test/distributions/test_distributions.py::TestDistributions::test_wishart_moments, test/distributions/test_distributions.py::TestDistributions::test_wishart_properties, test/distributions/test_distributions.py::TestDistributions::test_wishart_sample, test/distributions/test_distributions.py::TestDistributions::test_wishart_stable_with_precision_matrix, test/distributions/test_distributions.py::TestDistributions::test_zero_excluded_binomial, 
test/distributions/test_distributions.py::TestRsample::test_beta_wrt_alpha, test/distributions/test_distributions.py::TestRsample::test_beta_wrt_beta, test/distributions/test_distributions.py::TestRsample::test_dirichlet_on_diagonal, test/distributions/test_distributions.py::TestRsample::test_dirichlet_tangent_field, test/distributions/test_distributions.py::TestDistributionShapes::test_bernoulli_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_beta_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_binomial_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_binomial_shape_vectorized_n, test/distributions/test_distributions.py::TestDistributionShapes::test_categorical_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_cauchy_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_cauchy_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_chi2_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_exponential_shape_scalar_param, test/distributions/test_distributions.py::TestDistributionShapes::test_exponential_shape_tensor_param, test/distributions/test_distributions.py::TestDistributionShapes::test_gamma_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_gamma_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_gumbel_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_laplace_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_mixture_same_family_mean_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_multinomial_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_normal_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_one_hot_categorical_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_studentT_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_studentT_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_uniform_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_uniform_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_vonmises_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_wishart_shape_scalar_params, test/distributions/test_distributions.py::TestKL::test_entropy_monte_carlo, test/distributions/test_distributions.py::TestKL::test_kl_exponential_family, test/distributions/test_distributions.py::TestKL::test_kl_lowrank_multivariate_normal, test/distributions/test_distributions.py::TestKL::test_kl_lowrank_multivariate_normal_batched, test/distributions/test_distributions.py::TestKL::test_kl_multivariate_normal_batched, test/distributions/test_distributions.py::TestKL::test_kl_multivariate_normal_batched_broadcasted, test/distributions/test_distributions.py::TestKL::test_kl_transformed, test/distributions/test_distributions.py::TestConstraints::test_support_constraints, test/distributions/test_distributions.py::TestNumericalStability::test_bernoulli_gradient, 
test/distributions/test_distributions.py::TestNumericalStability::test_categorical_log_prob_with_logits, test/distributions/test_distributions.py::TestNumericalStability::test_continuous_bernoulli_gradient, test/distributions/test_distributions.py::TestNumericalStability::test_continuous_bernoulli_with_logits_overflow, test/distributions/test_distributions.py::TestNumericalStability::test_multinomial_log_prob, test/distributions/test_distributions.py::TestLazyLogitsInitialization::test_lazy_logits_initialization, test/distributions/test_distributions.py::TestAgainstScipy::test_icdf, test/distributions/test_distributions.py::TestAgainstScipy::test_mean, test/distributions/test_distributions.py::TestFunctors::test_cat_event_dim, test/distributions/test_distributions.py::TestFunctors::test_stack_transform, test/distributions/test_distributions.py::TestValidation::test_invalid_log_probs_arg, test/distributions/test_distributions.py::TestValidation::test_valid, test/distributions/test_distributions.py::TestValidation::test_warning_unimplemented_constraints, test/distributions/test_distributions.py::TestJit::test_cdf, test/distributions/test_distributions.py::TestJit::test_mean, test/distributions/test_distributions.py::TestJit::test_sample 2025-03-17T18:39:09.2624361Z 2025-03-17T18:39:09.2624647Z Running distributions/test_distributions 2/2 ... [2025-03-17 18:39:09.252014] 2025-03-17T18:39:09.2625165Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:39:09.2626435Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'distributions/test_distributions.py', '--shard-id=2', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:39:09.252336] 2025-03-17T18:45:10.2171760Z 2025-03-17T18:45:10.2173042Z distributions/test_distributions 2/2 was successful, full logs can be found in artifacts with path test/test-reports/distributions.test_distributions_2.2_70e8e91b504d7941_.log 2025-03-17T18:45:10.2212948Z Running 96 items in this shard: test/distributions/test_distributions.py::TestDistributions::test_beta_sample, test/distributions/test_distributions.py::TestDistributions::test_beta_shape, test/distributions/test_distributions.py::TestDistributions::test_beta_underflow_gpu, test/distributions/test_distributions.py::TestDistributions::test_binomial_bfloat16, test/distributions/test_distributions.py::TestDistributions::test_binomial_enumerate_support, test/distributions/test_distributions.py::TestDistributions::test_binomial_extreme_vals, test/distributions/test_distributions.py::TestDistributions::test_binomial_log_prob_and_entropy, test/distributions/test_distributions.py::TestDistributions::test_binomial_sample, test/distributions/test_distributions.py::TestDistributions::test_binomial_stable, test/distributions/test_distributions.py::TestDistributions::test_chi2_sample, test/distributions/test_distributions.py::TestDistributions::test_dirichlet_log_prob, test/distributions/test_distributions.py::TestDistributions::test_dirichlet_log_prob_zero, test/distributions/test_distributions.py::TestDistributions::test_dirichlet_mode, test/distributions/test_distributions.py::TestDistributions::test_dirichlet_sample, test/distributions/test_distributions.py::TestDistributions::test_dirichlet_shape, test/distributions/test_distributions.py::TestDistributions::test_distribution_subclass_expand, test/distributions/test_distributions.py::TestDistributions::test_fishersnedecor_sample, 
test/distributions/test_distributions.py::TestDistributions::test_gamma_gpu_sample, test/distributions/test_distributions.py::TestDistributions::test_gamma_gpu_shape, test/distributions/test_distributions.py::TestDistributions::test_gamma_log_prob_at_boundary, test/distributions/test_distributions.py::TestDistributions::test_gumbel, test/distributions/test_distributions.py::TestDistributions::test_halfnormal_sample, test/distributions/test_distributions.py::TestDistributions::test_laplace, test/distributions/test_distributions.py::TestDistributions::test_laplace_sample, test/distributions/test_distributions.py::TestDistributions::test_lazy_property_grad, test/distributions/test_distributions.py::TestDistributions::test_logisticnormal, test/distributions/test_distributions.py::TestDistributions::test_lognormal, test/distributions/test_distributions.py::TestDistributions::test_lowrank_multivariate_normal_properties, test/distributions/test_distributions.py::TestDistributions::test_lowrank_multivariate_normal_sample, test/distributions/test_distributions.py::TestDistributions::test_mixture_same_family_sample, test/distributions/test_distributions.py::TestDistributions::test_mixture_same_family_shape, test/distributions/test_distributions.py::TestDistributions::test_mode, test/distributions/test_distributions.py::TestDistributions::test_multinomial_1d, test/distributions/test_distributions.py::TestDistributions::test_multinomial_sequential_draw, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_sample, test/distributions/test_distributions.py::TestDistributions::test_multivariate_normal_stable_with_precision_matrix, test/distributions/test_distributions.py::TestDistributions::test_negative_binomial_log_prob, test/distributions/test_distributions.py::TestDistributions::test_negative_binomial_log_prob_vectorized_count, test/distributions/test_distributions.py::TestDistributions::test_one_hot_categorical_1d, test/distributions/test_distributions.py::TestDistributions::test_one_hot_categorical_enumerate_support, test/distributions/test_distributions.py::TestDistributions::test_poisson_gpu_sample, test/distributions/test_distributions.py::TestDistributions::test_poisson_sample, test/distributions/test_distributions.py::TestDistributions::test_poisson_shape, test/distributions/test_distributions.py::TestDistributions::test_relaxed_bernoulli, test/distributions/test_distributions.py::TestDistributions::test_relaxed_one_hot_categorical_1d, test/distributions/test_distributions.py::TestDistributions::test_relaxed_one_hot_categorical_2d, test/distributions/test_distributions.py::TestDistributions::test_rsample_requires_grad, test/distributions/test_distributions.py::TestDistributions::test_sample_detached, test/distributions/test_distributions.py::TestDistributions::test_uniform, test/distributions/test_distributions.py::TestDistributions::test_valid_parameter_broadcasting, test/distributions/test_distributions.py::TestDistributions::test_wishart_shape, test/distributions/test_distributions.py::TestRsample::test_chi2, test/distributions/test_distributions.py::TestRsample::test_dirichlet_multivariate, test/distributions/test_distributions.py::TestRsample::test_gamma, test/distributions/test_distributions.py::TestDistributionShapes::test_bernoulli_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_beta_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_chi2_shape_scalar_params, 
test/distributions/test_distributions.py::TestDistributionShapes::test_continuous_bernoulli_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_continuous_bernoulli_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_dirichlet_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_entropy_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_geometric_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_geometric_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_halfcauchy_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_halfcauchy_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_kumaraswamy_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_laplace_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_mixture_same_family_shape, test/distributions/test_distributions.py::TestDistributionShapes::test_normal_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_pareto_shape_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_vonmises_shape_tensor_params, test/distributions/test_distributions.py::TestDistributionShapes::test_weibull_scale_scalar_params, test/distributions/test_distributions.py::TestDistributionShapes::test_wishart_shape_tensor_params, test/distributions/test_distributions.py::TestKL::test_entropy_exponential_family, test/distributions/test_distributions.py::TestKL::test_kl_edgecases, test/distributions/test_distributions.py::TestKL::test_kl_infinite, test/distributions/test_distributions.py::TestKL::test_kl_monte_carlo, test/distributions/test_distributions.py::TestKL::test_kl_multivariate_normal, test/distributions/test_distributions.py::TestKL::test_kl_shape, test/distributions/test_distributions.py::TestConstraints::test_params_constraints, test/distributions/test_distributions.py::TestNumericalStability::test_bernoulli_with_logits_overflow, test/distributions/test_distributions.py::TestNumericalStability::test_bernoulli_with_logits_underflow, test/distributions/test_distributions.py::TestNumericalStability::test_categorical_log_prob, test/distributions/test_distributions.py::TestNumericalStability::test_continuous_bernoulli_with_logits_underflow, test/distributions/test_distributions.py::TestNumericalStability::test_multinomial_log_prob_with_logits, test/distributions/test_distributions.py::TestLazyLogitsInitialization::test_lazy_probs_initialization, test/distributions/test_distributions.py::TestAgainstScipy::test_cdf, test/distributions/test_distributions.py::TestAgainstScipy::test_variance_stddev, test/distributions/test_distributions.py::TestFunctors::test_cat_transform, test/distributions/test_distributions.py::TestFunctors::test_cat_transform_non_uniform, test/distributions/test_distributions.py::TestValidation::test_invalid, test/distributions/test_distributions.py::TestJit::test_entropy, test/distributions/test_distributions.py::TestJit::test_enumerate_support, test/distributions/test_distributions.py::TestJit::test_log_prob, test/distributions/test_distributions.py::TestJit::test_rsample, test/distributions/test_distributions.py::TestJit::test_variance 2025-03-17T18:45:10.2251836Z 2025-03-17T18:45:10.2252002Z Running doctests 
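The doctest step that starts here drives xdoctest over the installed torch package (the "Start doctest_module(...)" / "Listing tests" lines that follow). A rough sketch of running the same collection phase outside CI, assuming xdoctest's documented doctest_module(module, command=...) entry point; the exact keyword usage is an assumption based on xdoctest's README, not taken from this log.

# Collect (but do not run) the doctests xdoctest finds in the installed torch
# package. command="list" is assumed to mirror the "Listing tests" phase below.
import xdoctest

xdoctest.doctest_module("torch", command="list")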
1/1 ... [2025-03-17 18:45:10.217517] 2025-03-17T18:45:10.3065456Z Start doctest_module('/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch') 2025-03-17T18:45:10.3065988Z Listing tests 2025-03-17T18:45:10.5031793Z msg = Cannot scrape callname=Tensor.dim_order in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py line=1507. 2025-03-17T18:45:10.5032747Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5033545Z 2025-03-17T18:45:10.5033734Z dim_order(ambiguity_check=False) -> tuple 2025-03-17T18:45:10.5033993Z 2025-03-17T18:45:10.5034362Z Returns the uniquely determined tuple of int describing the dim order or 2025-03-17T18:45:10.5035065Z physical layout of :attr:`self`. 2025-03-17T18:45:10.5035411Z 2025-03-17T18:45:10.5035860Z The dim order represents how dimensions are laid out in memory of dense tensors, 2025-03-17T18:45:10.5037041Z starting from the outermost to the innermost dimension. 2025-03-17T18:45:10.5037553Z 2025-03-17T18:45:10.5037873Z Note that the dim order may not always be uniquely determined. 2025-03-17T18:45:10.5039118Z If `ambiguity_check` is True, this function raises a RuntimeError when the dim order cannot be uniquely determined; 2025-03-17T18:45:10.5040825Z If `ambiguity_check` is a list of memory formats, this function raises a RuntimeError when tensor can not be interpreted 2025-03-17T18:45:10.5041905Z into exactly one of the given memory formats, or it cannot be uniquely determined. 2025-03-17T18:45:10.5042635Z If `ambiguity_check` is False, it will return one of legal dim order(s) without checking its uniqueness. 2025-03-17T18:45:10.5043220Z Otherwise, it will raise TypeError. 2025-03-17T18:45:10.5043452Z 2025-03-17T18:45:10.5043544Z Args: 2025-03-17T18:45:10.5044032Z ambiguity_check (bool or List[torch.memory_format]): The check method for ambiguity of dim order. 2025-03-17T18:45:10.5044494Z 2025-03-17T18:45:10.5044619Z Examples:: 2025-03-17T18:45:10.5044764Z 2025-03-17T18:45:10.5044883Z >>> torch.empty((2, 3, 5, 7)).dim_order() 2025-03-17T18:45:10.5045224Z (0, 1, 2, 3) 2025-03-17T18:45:10.5045539Z >>> torch.empty((2, 3, 5, 7)).transpose(1, 2).dim_order() 2025-03-17T18:45:10.5045916Z (0, 2, 1, 3) 2025-03-17T18:45:10.5046284Z >>> torch.empty((2, 3, 5, 7), memory_format=torch.channels_last).dim_order() 2025-03-17T18:45:10.5046717Z (0, 2, 3, 1) 2025-03-17T18:45:10.5046984Z >>> torch.empty((1, 2, 3, 4)).dim_order() 2025-03-17T18:45:10.5047315Z (0, 1, 2, 3) 2025-03-17T18:45:10.5047550Z >>> try: 2025-03-17T18:45:10.5047857Z ... torch.empty((1, 2, 3, 4)).dim_order(ambiguity_check=True) 2025-03-17T18:45:10.5048282Z ... except RuntimeError as e: 2025-03-17T18:45:10.5048596Z ... print(e) 2025-03-17T18:45:10.5049083Z The tensor does not have unique dim order, or cannot map to exact one of the given memory formats. 2025-03-17T18:45:10.5049655Z >>> torch.empty((1, 2, 3, 4)).dim_order( 2025-03-17T18:45:10.5050108Z ... ambiguity_check=[torch.contiguous_format, torch.channels_last] 2025-03-17T18:45:10.5050698Z ... ) # It can be mapped to contiguous format 2025-03-17T18:45:10.5051050Z (0, 1, 2, 3) 2025-03-17T18:45:10.5051293Z >>> try: 2025-03-17T18:45:10.5051629Z ... torch.empty((1, 2, 3, 4)).dim_order(ambiguity_check="ILLEGAL") 2025-03-17T18:45:10.5052063Z ... except TypeError as e: 2025-03-17T18:45:10.5052367Z ... print(e) 2025-03-17T18:45:10.5052778Z The ambiguity_check argument must be a bool or a list of memory formats. 2025-03-17T18:45:10.5053138Z 2025-03-17T18:45:10.5053251Z .. 
warning:: 2025-03-17T18:45:10.5053600Z The dim_order tensor API is experimental and subject to change. 2025-03-17T18:45:10.5053920Z 2025-03-17T18:45:10.5054191Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5054568Z 2025-03-17T18:45:10.5563080Z msg = Cannot scrape callname=meshgrid in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py line=446. 2025-03-17T18:45:10.5564223Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5564892Z Creates grids of coordinates specified by the 1D inputs in `attr`:tensors. 2025-03-17T18:45:10.5565256Z 2025-03-17T18:45:10.5565455Z This is helpful when you want to visualize data over some 2025-03-17T18:45:10.5565920Z range of inputs. See below for a plotting example. 2025-03-17T18:45:10.5566194Z 2025-03-17T18:45:10.5566387Z Given :math:`N` 1D tensors :math:`T_0 \ldots T_{N-1}` as 2025-03-17T18:45:10.5566876Z inputs with corresponding sizes :math:`S_0 \ldots S_{N-1}`, 2025-03-17T18:45:10.5567390Z this creates :math:`N` N-dimensional tensors :math:`G_0 \ldots 2025-03-17T18:45:10.5567872Z G_{N-1}`, each with shape :math:`(S_0, ..., S_{N-1})` where 2025-03-17T18:45:10.5568348Z the output :math:`G_i` is constructed by expanding :math:`T_i` 2025-03-17T18:45:10.5568774Z to the result shape. 2025-03-17T18:45:10.5568961Z 2025-03-17T18:45:10.5569093Z .. note:: 2025-03-17T18:45:10.5569420Z 0D inputs are treated equivalently to 1D inputs of a 2025-03-17T18:45:10.5569814Z single element. 2025-03-17T18:45:10.5569993Z 2025-03-17T18:45:10.5570188Z .. warning:: 2025-03-17T18:45:10.5570529Z `torch.meshgrid(*tensors)` currently has the same behavior 2025-03-17T18:45:10.5571202Z as calling `numpy.meshgrid(*arrays, indexing='ij')`. 2025-03-17T18:45:10.5571503Z 2025-03-17T18:45:10.5571663Z In the future `torch.meshgrid` will transition to 2025-03-17T18:45:10.5572066Z `indexing='xy'` as the default. 2025-03-17T18:45:10.5572309Z 2025-03-17T18:45:10.5572511Z https://github.com/pytorch/pytorch/issues/50276 tracks 2025-03-17T18:45:10.5573010Z this issue with the goal of migrating to NumPy's behavior. 2025-03-17T18:45:10.5573324Z 2025-03-17T18:45:10.5573424Z .. seealso:: 2025-03-17T18:45:10.5573591Z 2025-03-17T18:45:10.5573769Z :func:`torch.cartesian_prod` has the same effect but it 2025-03-17T18:45:10.5574243Z collects the data in a tensor of vectors. 2025-03-17T18:45:10.5574505Z 2025-03-17T18:45:10.5574597Z Args: 2025-03-17T18:45:10.5575033Z tensors (list of Tensor): list of scalars or 1 dimensional tensors. Scalars will be 2025-03-17T18:45:10.5575599Z treated as tensors of size :math:`(1,)` automatically 2025-03-17T18:45:10.5575901Z 2025-03-17T18:45:10.5576085Z indexing: (str, optional): the indexing mode, either "xy" 2025-03-17T18:45:10.5576565Z or "ij", defaults to "ij". See warning for future changes. 2025-03-17T18:45:10.5576873Z 2025-03-17T18:45:10.5577036Z If "xy" is selected, the first dimension corresponds 2025-03-17T18:45:10.5577492Z to the cardinality of the second input and the second 2025-03-17T18:45:10.5577965Z dimension corresponds to the cardinality of the first 2025-03-17T18:45:10.5578363Z input. 2025-03-17T18:45:10.5578635Z 2025-03-17T18:45:10.5578792Z If "ij" is selected, the dimensions are in the same 2025-03-17T18:45:10.5579211Z order as the cardinality of the inputs. 
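The Tensor.dim_order docstring scraped earlier in this doctest pass describes the dim order as the dimensions listed from outermost to innermost in memory. The sketch below relates that to strides; the stride interpretation is my reading of the scraped text, not something the docstring states verbatim.

# For a dense tensor with distinct strides, listing dimensions by decreasing
# stride should reproduce dim_order(). This illustrates the docstring above;
# it is not claimed here as an official invariant.
import torch

t = torch.empty((2, 3, 5, 7), memory_format=torch.channels_last)
order = t.dim_order()      # (0, 2, 3, 1), as in the docstring example
strides = t.stride()       # (105, 1, 21, 3)
by_decreasing_stride = sorted(range(t.dim()), key=lambda d: -strides[d])
assert list(order) == by_decreasing_stride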
2025-03-17T18:45:10.5579476Z 2025-03-17T18:45:10.5579571Z Returns: 2025-03-17T18:45:10.5579900Z seq (sequence of Tensors): If the input has :math:`N` 2025-03-17T18:45:10.5580598Z tensors of size :math:`S_0 \ldots S_{N-1}``, then the 2025-03-17T18:45:10.5581325Z output will also have :math:`N` tensors, where each tensor 2025-03-17T18:45:10.5581985Z is of shape :math:`(S_0, ..., S_{N-1})`. 2025-03-17T18:45:10.5582292Z 2025-03-17T18:45:10.5582586Z Example:: 2025-03-17T18:45:10.5582736Z 2025-03-17T18:45:10.5582892Z >>> x = torch.tensor([1, 2, 3]) 2025-03-17T18:45:10.5583422Z >>> y = torch.tensor([4, 5, 6]) 2025-03-17T18:45:10.5583735Z 2025-03-17T18:45:10.5584104Z Observe the element-wise pairings across the grid, (1, 4), 2025-03-17T18:45:10.5584822Z (1, 5), ..., (3, 6). This is the same thing as the 2025-03-17T18:45:10.5585262Z cartesian product. 2025-03-17T18:45:10.5585935Z >>> grid_x, grid_y = torch.meshgrid(x, y, indexing='ij') 2025-03-17T18:45:10.5586418Z >>> grid_x 2025-03-17T18:45:10.5586761Z tensor([[1, 1, 1], 2025-03-17T18:45:10.5587059Z [2, 2, 2], 2025-03-17T18:45:10.5587329Z [3, 3, 3]]) 2025-03-17T18:45:10.5587610Z >>> grid_y 2025-03-17T18:45:10.5587876Z tensor([[4, 5, 6], 2025-03-17T18:45:10.5588162Z [4, 5, 6], 2025-03-17T18:45:10.5588441Z [4, 5, 6]]) 2025-03-17T18:45:10.5588633Z 2025-03-17T18:45:10.5588805Z This correspondence can be seen when these grids are 2025-03-17T18:45:10.5589209Z stacked properly. 2025-03-17T18:45:10.5589612Z >>> torch.equal(torch.cat(tuple(torch.dstack([grid_x, grid_y]))), 2025-03-17T18:45:10.5590084Z ... torch.cartesian_prod(x, y)) 2025-03-17T18:45:10.5590431Z True 2025-03-17T18:45:10.5590574Z 2025-03-17T18:45:10.5590769Z `torch.meshgrid` is commonly used to produce a grid for 2025-03-17T18:45:10.5591175Z plotting. 2025-03-17T18:45:10.5591485Z >>> # xdoctest: +REQUIRES(module:matplotlib) 2025-03-17T18:45:10.5591993Z >>> # xdoctest: +REQUIRES(env:DOCTEST_SHOW) 2025-03-17T18:45:10.5592384Z >>> import matplotlib.pyplot as plt 2025-03-17T18:45:10.5592769Z >>> xs = torch.linspace(-5, 5, steps=100) 2025-03-17T18:45:10.5593145Z >>> ys = torch.linspace(-5, 5, steps=100) 2025-03-17T18:45:10.5593540Z >>> x, y = torch.meshgrid(xs, ys, indexing='xy') 2025-03-17T18:45:10.5593953Z >>> z = torch.sin(torch.sqrt(x * x + y * y)) 2025-03-17T18:45:10.5594322Z >>> ax = plt.axes(projection='3d') 2025-03-17T18:45:10.5594724Z >>> ax.plot_surface(x.numpy(), y.numpy(), z.numpy()) 2025-03-17T18:45:10.5595118Z >>> plt.show() 2025-03-17T18:45:10.5595406Z 2025-03-17T18:45:10.5595550Z .. image:: ../_static/img/meshgrid.png 2025-03-17T18:45:10.5595895Z :width: 512 2025-03-17T18:45:10.5596071Z 2025-03-17T18:45:10.5596164Z 2025-03-17T18:45:10.5596566Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5596963Z 2025-03-17T18:45:10.5597539Z msg = Cannot scrape callname=_unique_impl in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py line=842. 2025-03-17T18:45:10.5598436Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5599228Z unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) -> tuple[Tensor, Tensor, Tensor] 2025-03-17T18:45:10.5599738Z 2025-03-17T18:45:10.5599915Z Returns the unique elements of the input tensor. 2025-03-17T18:45:10.5600186Z 2025-03-17T18:45:10.5600574Z .. 
note:: This function is different from :func:`torch.unique_consecutive` in the sense that 2025-03-17T18:45:10.5601217Z this function also eliminates non-consecutive duplicate values. 2025-03-17T18:45:10.5601553Z 2025-03-17T18:45:10.5601808Z .. note:: Currently in the CUDA implementation and the CPU implementation, 2025-03-17T18:45:10.5602474Z `torch.unique` always sort the tensor at the beginning regardless of the `sort` argument. 2025-03-17T18:45:10.5603196Z Sorting could be slow, so if your input tensor is already sorted, it is recommended to use 2025-03-17T18:45:10.5603800Z :func:`torch.unique_consecutive` which avoids the sorting. 2025-03-17T18:45:10.5604102Z 2025-03-17T18:45:10.5604209Z Args: 2025-03-17T18:45:10.5604463Z input (Tensor): the input tensor 2025-03-17T18:45:10.5604917Z sorted (bool): Whether to sort the unique elements in ascending order 2025-03-17T18:45:10.5605383Z before returning as output. 2025-03-17T18:45:10.5605830Z return_inverse (bool): Whether to also return the indices for where 2025-03-17T18:45:10.5606395Z elements in the original input ended up in the returned unique list. 2025-03-17T18:45:10.5606961Z return_counts (bool): Whether to also return the counts for each unique 2025-03-17T18:45:10.5607410Z element. 2025-03-17T18:45:10.5607795Z dim (int, optional): the dimension to operate upon. If ``None``, the 2025-03-17T18:45:10.5608342Z unique of the flattened input is returned. Otherwise, each of the 2025-03-17T18:45:10.5608888Z tensors indexed by the given dimension is treated as one of the 2025-03-17T18:45:10.5609440Z elements to apply the unique operation upon. See examples for more 2025-03-17T18:45:10.5609900Z details. Default: ``None`` 2025-03-17T18:45:10.5610130Z 2025-03-17T18:45:10.5610225Z Returns: 2025-03-17T18:45:10.5610666Z (Tensor, Tensor (optional), Tensor (optional)): A tensor or a tuple of tensors containing 2025-03-17T18:45:10.5611092Z 2025-03-17T18:45:10.5611291Z - **output** (*Tensor*): the output list of unique scalar elements. 2025-03-17T18:45:10.5611762Z - **inverse_indices** (*Tensor*): (optional) if 2025-03-17T18:45:10.5612218Z :attr:`return_inverse` is True, there will be an additional 2025-03-17T18:45:10.5612805Z returned tensor (same shape as input) representing the indices 2025-03-17T18:45:10.5613344Z for where elements in the original input map to in the output; 2025-03-17T18:45:10.5613866Z otherwise, this function will only return a single tensor. 2025-03-17T18:45:10.5614315Z - **counts** (*Tensor*): (optional) if 2025-03-17T18:45:10.5614746Z :attr:`return_counts` is True, there will be an additional 2025-03-17T18:45:10.5615253Z returned tensor (same shape as output or output.size(dim), 2025-03-17T18:45:10.5615770Z if dim was specified) representing the number of occurrences 2025-03-17T18:45:10.5616222Z for each unique value or tensor. 2025-03-17T18:45:10.5616456Z 2025-03-17T18:45:10.5616566Z Example:: 2025-03-17T18:45:10.5616703Z 2025-03-17T18:45:10.5616937Z >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long)) 2025-03-17T18:45:10.5617381Z >>> output 2025-03-17T18:45:10.5617642Z tensor([1, 2, 3]) 2025-03-17T18:45:10.5617816Z 2025-03-17T18:45:10.5617960Z >>> output, inverse_indices = torch.unique( 2025-03-17T18:45:10.5618457Z ... 
torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True) 2025-03-17T18:45:10.5618919Z >>> output 2025-03-17T18:45:10.5619171Z tensor([1, 2, 3]) 2025-03-17T18:45:10.5619439Z >>> inverse_indices 2025-03-17T18:45:10.5619723Z tensor([0, 2, 1, 2]) 2025-03-17T18:45:10.5619914Z 2025-03-17T18:45:10.5620045Z >>> output, inverse_indices = torch.unique( 2025-03-17T18:45:10.5620657Z ... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True) 2025-03-17T18:45:10.5621123Z >>> output 2025-03-17T18:45:10.5621376Z tensor([1, 2, 3]) 2025-03-17T18:45:10.5621656Z >>> inverse_indices 2025-03-17T18:45:10.5621939Z tensor([[0, 2], 2025-03-17T18:45:10.5622200Z [1, 2]]) 2025-03-17T18:45:10.5622375Z 2025-03-17T18:45:10.5622484Z >>> a = torch.tensor([ 2025-03-17T18:45:10.5622772Z ... [ 2025-03-17T18:45:10.5623018Z ... [1, 1, 0, 0], 2025-03-17T18:45:10.5623313Z ... [1, 1, 0, 0], 2025-03-17T18:45:10.5623602Z ... [0, 0, 1, 1], 2025-03-17T18:45:10.5623883Z ... ], 2025-03-17T18:45:10.5624124Z ... [ 2025-03-17T18:45:10.5624367Z ... [0, 0, 1, 1], 2025-03-17T18:45:10.5624643Z ... [0, 0, 1, 1], 2025-03-17T18:45:10.5624930Z ... [1, 1, 1, 1], 2025-03-17T18:45:10.5625217Z ... ], 2025-03-17T18:45:10.5625461Z ... [ 2025-03-17T18:45:10.5625704Z ... [1, 1, 0, 0], 2025-03-17T18:45:10.5625995Z ... [1, 1, 0, 0], 2025-03-17T18:45:10.5626285Z ... [0, 0, 1, 1], 2025-03-17T18:45:10.5626647Z ... ], 2025-03-17T18:45:10.5626891Z ... ]) 2025-03-17T18:45:10.5627026Z 2025-03-17T18:45:10.5627268Z >>> # If we call `torch.unique(a, dim=0)`, each of the tensors `a[idx, :, :]` 2025-03-17T18:45:10.5627834Z >>> # will be compared. We can see that `a[0, :, :]` and `a[2, :, :]` match 2025-03-17T18:45:10.5628314Z >>> # each other, so one of them will be removed. 2025-03-17T18:45:10.5628685Z >>> (a[0, :, :] == a[2, :, :]).all() 2025-03-17T18:45:10.5629005Z tensor(True) 2025-03-17T18:45:10.5629298Z >>> a_unique_dim0 = torch.unique(a, dim=0) 2025-03-17T18:45:10.5629649Z >>> a_unique_dim0 2025-03-17T18:45:10.5629928Z tensor([[[0, 0, 1, 1], 2025-03-17T18:45:10.5630209Z [0, 0, 1, 1], 2025-03-17T18:45:10.5630495Z [1, 1, 1, 1]], 2025-03-17T18:45:10.5630783Z [[1, 1, 0, 0], 2025-03-17T18:45:10.5631069Z [1, 1, 0, 0], 2025-03-17T18:45:10.5631357Z [0, 0, 1, 1]]]) 2025-03-17T18:45:10.5631556Z 2025-03-17T18:45:10.5631857Z >>> # Notice which sub-tensors from `a` match with the sub-tensors from 2025-03-17T18:45:10.5632313Z >>> # `a_unique_dim0`: 2025-03-17T18:45:10.5632641Z >>> (a_unique_dim0[0, :, :] == a[1, :, :]).all() 2025-03-17T18:45:10.5632993Z tensor(True) 2025-03-17T18:45:10.5633285Z >>> (a_unique_dim0[1, :, :] == a[0, :, :]).all() 2025-03-17T18:45:10.5633633Z tensor(True) 2025-03-17T18:45:10.5633789Z 2025-03-17T18:45:10.5634017Z >>> # For `torch.unique(a, dim=1)`, each of the tensors `a[:, idx, :]` are 2025-03-17T18:45:10.5634552Z >>> # compared. `a[:, 0, :]` and `a[:, 1, :]` match each other, so one of 2025-03-17T18:45:10.5634984Z >>> # them will be removed. 
2025-03-17T18:45:10.5635310Z >>> (a[:, 0, :] == a[:, 1, :]).all() 2025-03-17T18:45:10.5635636Z tensor(True) 2025-03-17T18:45:10.5635913Z >>> torch.unique(a, dim=1) 2025-03-17T18:45:10.5636235Z tensor([[[0, 0, 1, 1], 2025-03-17T18:45:10.5636521Z [1, 1, 0, 0]], 2025-03-17T18:45:10.5636977Z [[1, 1, 1, 1], 2025-03-17T18:45:10.5637279Z [0, 0, 1, 1]], 2025-03-17T18:45:10.5637565Z [[0, 0, 1, 1], 2025-03-17T18:45:10.5637850Z [1, 1, 0, 0]]]) 2025-03-17T18:45:10.5638050Z 2025-03-17T18:45:10.5638267Z >>> # For `torch.unique(a, dim=2)`, the tensors `a[:, :, idx]` are compared. 2025-03-17T18:45:10.5638778Z >>> # `a[:, :, 0]` and `a[:, :, 1]` match each other. Also, `a[:, :, 2]` and 2025-03-17T18:45:10.5639248Z >>> # `a[:, :, 3]` match each other as well. So in this case, two of the 2025-03-17T18:45:10.5639677Z >>> # sub-tensors will be removed. 2025-03-17T18:45:10.5640149Z >>> (a[:, :, 0] == a[:, :, 1]).all() 2025-03-17T18:45:10.5640471Z tensor(True) 2025-03-17T18:45:10.5640744Z >>> (a[:, :, 2] == a[:, :, 3]).all() 2025-03-17T18:45:10.5641067Z tensor(True) 2025-03-17T18:45:10.5641345Z >>> torch.unique(a, dim=2) 2025-03-17T18:45:10.5641660Z tensor([[[0, 1], 2025-03-17T18:45:10.5641938Z [0, 1], 2025-03-17T18:45:10.5642208Z [1, 0]], 2025-03-17T18:45:10.5642476Z [[1, 0], 2025-03-17T18:45:10.5642740Z [1, 0], 2025-03-17T18:45:10.5642992Z [1, 1]], 2025-03-17T18:45:10.5643260Z [[0, 1], 2025-03-17T18:45:10.5643520Z [0, 1], 2025-03-17T18:45:10.5643781Z [1, 0]]]) 2025-03-17T18:45:10.5644047Z 2025-03-17T18:45:10.5644434Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5644825Z 2025-03-17T18:45:10.5747887Z msg = Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=560. 2025-03-17T18:45:10.5748832Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5749299Z 2025-03-17T18:45:10.5749469Z Load a model from a github repo or a local directory. 2025-03-17T18:45:10.5749818Z 2025-03-17T18:45:10.5750064Z Note: Loading a model is the typical use case, but this can also be used to 2025-03-17T18:45:10.5750750Z for loading other objects such as tokenizers, loss functions, etc. 2025-03-17T18:45:10.5751205Z 2025-03-17T18:45:10.5751421Z If ``source`` is 'github', ``repo_or_dir`` is expected to be 2025-03-17T18:45:10.5751957Z of the form ``repo_owner/repo_name[:ref]`` with an optional 2025-03-17T18:45:10.5752362Z ref (a tag or a branch). 2025-03-17T18:45:10.5752547Z 2025-03-17T18:45:10.5752721Z If ``source`` is 'local', ``repo_or_dir`` is expected to be a 2025-03-17T18:45:10.5753129Z path to a local directory. 2025-03-17T18:45:10.5753331Z 2025-03-17T18:45:10.5753419Z Args: 2025-03-17T18:45:10.5753685Z repo_or_dir (str): If ``source`` is 'github', 2025-03-17T18:45:10.5754412Z this should correspond to a github repo with format ``repo_owner/repo_name[:ref]`` with 2025-03-17T18:45:10.5755274Z an optional ref (tag or branch), for example 'pytorch/vision:0.10'. If ``ref`` is not specified, 2025-03-17T18:45:10.5756176Z the default branch is assumed to be ``main`` if it exists, and otherwise ``master``. 2025-03-17T18:45:10.5756791Z If ``source`` is 'local' then it should be a path to a local directory. 2025-03-17T18:45:10.5757331Z model (str): the name of a callable (entrypoint) defined in the 2025-03-17T18:45:10.5757772Z repo/dir's ``hubconf.py``. 2025-03-17T18:45:10.5758198Z *args (optional): the corresponding args for callable ``model``. 
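The torch.unique docstring scraped above describes the return_inverse and return_counts flags, but its doctest could not be parsed by the scraper. A short sketch exercising those flags, assuming only what the scraped text itself describes.

# `inverse` indexes back into `values`, and `counts` gives per-value counts,
# as described in the torch.unique docstring above.
import torch

x = torch.tensor([1, 3, 2, 3, 1, 1])
values, inverse, counts = torch.unique(
    x, sorted=True, return_inverse=True, return_counts=True
)
assert torch.equal(values, torch.tensor([1, 2, 3]))
assert torch.equal(values[inverse], x)           # round-trip to the input
assert torch.equal(counts, torch.tensor([3, 1, 2]))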
2025-03-17T18:45:10.5758819Z source (str, optional): 'github' or 'local'. Specifies how 2025-03-17T18:45:10.5759306Z ``repo_or_dir`` is to be interpreted. Default is 'github'. 2025-03-17T18:45:10.5759986Z trust_repo (bool, str or None): ``"check"``, ``True``, ``False`` or ``None``. 2025-03-17T18:45:10.5761019Z This parameter was introduced in v1.12 and helps ensuring that users 2025-03-17T18:45:10.5761516Z only run code from repos that they trust. 2025-03-17T18:45:10.5761764Z 2025-03-17T18:45:10.5761983Z - If ``False``, a prompt will ask the user whether the repo should 2025-03-17T18:45:10.5762405Z be trusted. 2025-03-17T18:45:10.5762768Z - If ``True``, the repo will be added to the trusted list and loaded 2025-03-17T18:45:10.5763229Z without requiring explicit confirmation. 2025-03-17T18:45:10.5763664Z - If ``"check"``, the repo will be checked against the list of 2025-03-17T18:45:10.5764184Z trusted repos in the cache. If it is not present in that list, the 2025-03-17T18:45:10.5764733Z behaviour will fall back onto the ``trust_repo=False`` option. 2025-03-17T18:45:10.5765362Z - If ``None``: this will raise a warning, inviting the user to set 2025-03-17T18:45:10.5765864Z ``trust_repo`` to either ``False``, ``True`` or ``"check"``. This 2025-03-17T18:45:10.5766402Z is only present for backward compatibility and will be removed in 2025-03-17T18:45:10.5766840Z v2.0. 2025-03-17T18:45:10.5766996Z 2025-03-17T18:45:10.5767213Z Default is ``None`` and will eventually change to ``"check"`` in v2.0. 2025-03-17T18:45:10.5767771Z force_reload (bool, optional): whether to force a fresh download of 2025-03-17T18:45:10.5768309Z the github repo unconditionally. Does not have any effect if 2025-03-17T18:45:10.5768760Z ``source = 'local'``. Default is ``False``. 2025-03-17T18:45:10.5769222Z verbose (bool, optional): If ``False``, mute messages about hitting 2025-03-17T18:45:10.5769774Z local caches. Note that the message about first download cannot be 2025-03-17T18:45:10.5770288Z muted. Does not have any effect if ``source = 'local'``. 2025-03-17T18:45:10.5770706Z Default is ``True``. 2025-03-17T18:45:10.5771204Z skip_validation (bool, optional): if ``False``, torchhub will check that the branch or commit 2025-03-17T18:45:10.5771933Z specified by the ``github`` argument properly belongs to the repo owner. This will make 2025-03-17T18:45:10.5772645Z requests to the GitHub API; you can specify a non-default GitHub token by setting the 2025-03-17T18:45:10.5773242Z ``GITHUB_TOKEN`` environment variable. Default is ``False``. 2025-03-17T18:45:10.5773778Z **kwargs (optional): the corresponding kwargs for callable ``model``. 2025-03-17T18:45:10.5774119Z 2025-03-17T18:45:10.5774229Z Returns: 2025-03-17T18:45:10.5774561Z The output of the ``model`` callable when called with the given 2025-03-17T18:45:10.5774989Z ``*args`` and ``**kwargs``. 2025-03-17T18:45:10.5775182Z 2025-03-17T18:45:10.5775292Z Example: 2025-03-17T18:45:10.5775571Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:10.5775949Z >>> # from a github repo 2025-03-17T18:45:10.5776253Z >>> repo = "pytorch/vision" 2025-03-17T18:45:10.5776568Z >>> model = torch.hub.load( 2025-03-17T18:45:10.5776958Z ... repo, "resnet50", weights="ResNet50_Weights.IMAGENET1K_V1" 2025-03-17T18:45:10.5777348Z ... 
) 2025-03-17T18:45:10.5777587Z >>> # from a local directory 2025-03-17T18:45:10.5777987Z >>> path = "/some/local/path/pytorch/vision" 2025-03-17T18:45:10.5778347Z >>> # xdoctest: +SKIP 2025-03-17T18:45:10.5778784Z >>> model = torch.hub.load(path, "resnet50", weights="ResNet50_Weights.DEFAULT") 2025-03-17T18:45:10.5779173Z 2025-03-17T18:45:10.5779432Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5779823Z 2025-03-17T18:45:10.5780280Z msg = Cannot scrape callname=_load_local in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=652. 2025-03-17T18:45:10.5781135Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5781540Z 2025-03-17T18:45:10.5781720Z Load a model from a local directory with a ``hubconf.py``. 2025-03-17T18:45:10.5782033Z 2025-03-17T18:45:10.5782121Z Args: 2025-03-17T18:45:10.5782442Z hubconf_dir (str): path to a local directory that contains a 2025-03-17T18:45:10.5782858Z ``hubconf.py``. 2025-03-17T18:45:10.5783222Z model (str): name of an entrypoint defined in the directory's 2025-03-17T18:45:10.5783633Z ``hubconf.py``. 2025-03-17T18:45:10.5784010Z *args (optional): the corresponding args for callable ``model``. 2025-03-17T18:45:10.5784561Z **kwargs (optional): the corresponding kwargs for callable ``model``. 2025-03-17T18:45:10.5784902Z 2025-03-17T18:45:10.5785006Z Returns: 2025-03-17T18:45:10.5785313Z a single model with corresponding pretrained weights. 2025-03-17T18:45:10.5785599Z 2025-03-17T18:45:10.5785706Z Example: 2025-03-17T18:45:10.5785965Z >>> # xdoctest: +SKIP("stub local path") 2025-03-17T18:45:10.5786398Z >>> path = "/some/local/path/pytorch/vision" 2025-03-17T18:45:10.5786847Z >>> model = _load_local( 2025-03-17T18:45:10.5787132Z ... path, 2025-03-17T18:45:10.5787372Z ... "resnet50", 2025-03-17T18:45:10.5787687Z ... weights="ResNet50_Weights.IMAGENET1K_V1", 2025-03-17T18:45:10.5788042Z ... ) 2025-03-17T18:45:10.5788181Z 2025-03-17T18:45:10.5788444Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5788833Z 2025-03-17T18:45:10.5789326Z msg = Cannot scrape callname=download_url_to_file in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=691. 2025-03-17T18:45:10.5790211Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5790759Z Download object at the given URL to a local path. 2025-03-17T18:45:10.5791036Z 2025-03-17T18:45:10.5791128Z Args: 2025-03-17T18:45:10.5791396Z url (str): URL of the object to download 2025-03-17T18:45:10.5791892Z dst (str): Full path where object will be saved, e.g. ``/tmp/temporary_file`` 2025-03-17T18:45:10.5792602Z hash_prefix (str, optional): If not None, the SHA256 downloaded file should start with ``hash_prefix``. 2025-03-17T18:45:10.5793172Z Default: None 2025-03-17T18:45:10.5793612Z progress (bool, optional): whether or not to display a progress bar to stderr 2025-03-17T18:45:10.5794101Z Default: True 2025-03-17T18:45:10.5794285Z 2025-03-17T18:45:10.5794378Z Example: 2025-03-17T18:45:10.5794660Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:10.5795037Z >>> # xdoctest: +REQUIRES(POSIX) 2025-03-17T18:45:10.5795388Z >>> torch.hub.download_url_to_file( 2025-03-17T18:45:10.5795861Z ... "https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth", 2025-03-17T18:45:10.5796326Z ... "/tmp/temporary_file", 2025-03-17T18:45:10.5796641Z ... 
) 2025-03-17T18:45:10.5796776Z 2025-03-17T18:45:10.5796876Z 2025-03-17T18:45:10.5797259Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5797638Z 2025-03-17T18:45:10.5798174Z msg = Cannot scrape callname=load_state_dict_from_url in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=816. 2025-03-17T18:45:10.5799158Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5799711Z Loads the Torch serialized object at the given URL. 2025-03-17T18:45:10.5799980Z 2025-03-17T18:45:10.5800180Z If downloaded file is a zip file, it will be automatically 2025-03-17T18:45:10.5800589Z decompressed. 2025-03-17T18:45:10.5800740Z 2025-03-17T18:45:10.5800975Z If the object is already present in `model_dir`, it's deserialized and 2025-03-17T18:45:10.5801418Z returned. 2025-03-17T18:45:10.5801781Z The default value of ``model_dir`` is ``/checkpoints`` where 2025-03-17T18:45:10.5802337Z ``hub_dir`` is the directory returned by :func:`~torch.hub.get_dir`. 2025-03-17T18:45:10.5802657Z 2025-03-17T18:45:10.5802760Z Args: 2025-03-17T18:45:10.5803010Z url (str): URL of the object to download 2025-03-17T18:45:10.5803464Z model_dir (str, optional): directory in which to save the object 2025-03-17T18:45:10.5804154Z map_location (optional): a function or a dict specifying how to remap storage locations (see torch.load) 2025-03-17T18:45:10.5804906Z progress (bool, optional): whether or not to display a progress bar to stderr. 2025-03-17T18:45:10.5805401Z Default: True 2025-03-17T18:45:10.5805922Z check_hash(bool, optional): If True, the filename part of the URL should follow the naming convention 2025-03-17T18:45:10.5806608Z ``filename-.ext`` where ```` is the first eight or more 2025-03-17T18:45:10.5807198Z digits of the SHA256 hash of the contents of the file. The hash is used to 2025-03-17T18:45:10.5807814Z ensure unique names and to verify the contents of the file. 2025-03-17T18:45:10.5808240Z Default: False 2025-03-17T18:45:10.5808768Z file_name (str, optional): name for the downloaded file. Filename from ``url`` will be used if not set. 2025-03-17T18:45:10.5809577Z weights_only(bool, optional): If True, only weights will be loaded and no complex pickled objects. 2025-03-17T18:45:10.5810305Z Recommended for untrusted sources. See :func:`~torch.load` for more details. 2025-03-17T18:45:10.5810706Z 2025-03-17T18:45:10.5810800Z Example: 2025-03-17T18:45:10.5811087Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:10.5811505Z >>> state_dict = torch.hub.load_state_dict_from_url( 2025-03-17T18:45:10.5812009Z ... "https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth" 2025-03-17T18:45:10.5812453Z ... ) 2025-03-17T18:45:10.5812587Z 2025-03-17T18:45:10.5812691Z 2025-03-17T18:45:10.5813086Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.5813463Z 2025-03-17T18:45:10.5839374Z msg = Cannot scrape callname=Library.fallback in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=376. 2025-03-17T18:45:10.5840344Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:10.5840986Z Registers the function implementation as the fallback for the given key. 2025-03-17T18:45:10.5841366Z 2025-03-17T18:45:10.5841607Z This function only works for a library with global namespace ("_"). 
2025-03-17T18:45:10.5841952Z 2025-03-17T18:45:10.5842061Z Args: 2025-03-17T18:45:10.5842498Z fn: function used as fallback for the given dispatch key or :func:`~fallthrough_kernel` 2025-03-17T18:45:10.5843040Z to register a fallthrough. 2025-03-17T18:45:10.5843616Z dispatch_key: dispatch key that the input function should be registered for. By default, it uses 2025-03-17T18:45:10.5844258Z the dispatch key that the library was created with. 2025-03-17T18:45:10.5844943Z with_keyset: flag controlling if the current dispatcher call keyset should be passed as the first argument 2025-03-17T18:45:10.5846399Z to :attr:`fn` when calling. This should be used to create the appropriate keyset for redispatch calls. 2025-03-17T18:45:10.5846868Z 2025-03-17T18:45:10.5846995Z Example:: 2025-03-17T18:45:10.5847275Z >>> my_lib = Library("_", "IMPL") 2025-03-17T18:45:10.5847660Z >>> def fallback_kernel(op, *args, **kwargs): 2025-03-17T18:45:10.5848057Z >>> # Handle all autocast ops generically 2025-03-17T18:45:10.5848411Z >>> # ... 2025-03-17T18:45:10.5848734Z >>> my_lib.fallback(fallback_kernel, "Autocast") 2025-03-17T18:45:10.5849083Z 2025-03-17T18:45:10.5849860Z Original Error: IndentationError('expected an indented block after function definition on line 2', ('', 5, 1, 'my_lib.fallback(fallback_kernel, "Autocast")\n', 5, 7)) 2025-03-17T18:45:10.5850629Z 2025-03-17T18:45:10.5850767Z my_lib.fallback(fallback_kernel, "Autocast") 2025-03-17T18:45:10.5851108Z ^ 2025-03-17T18:45:10.5919425Z msg = Cannot scrape callname=register_fake in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=920. 2025-03-17T18:45:10.5920374Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:10.5920995Z Register a FakeTensor implementation ("fake impl") for this operator. 2025-03-17T18:45:10.5921348Z 2025-03-17T18:45:10.5921551Z Also sometimes known as a "meta kernel", "abstract impl". 2025-03-17T18:45:10.5921850Z 2025-03-17T18:45:10.5922118Z An "FakeTensor implementation" specifies the behavior of this operator on 2025-03-17T18:45:10.5922732Z Tensors that carry no data ("FakeTensor"). Given some input Tensors with 2025-03-17T18:45:10.5923333Z certain properties (sizes/strides/storage_offset/device), it specifies 2025-03-17T18:45:10.5923987Z what the properties of the output Tensors are. 2025-03-17T18:45:10.5924249Z 2025-03-17T18:45:10.5924500Z The FakeTensor implementation has the same signature as the operator. 2025-03-17T18:45:10.5925083Z It is run for both FakeTensors and meta tensors. To write a FakeTensor 2025-03-17T18:45:10.5925646Z implementation, assume that all Tensor inputs to the operator are 2025-03-17T18:45:10.5926209Z regular CPU/CUDA/Meta tensors, but they do not have storage, and 2025-03-17T18:45:10.5926757Z you are trying to return regular CPU/CUDA/Meta tensor(s) as output. 2025-03-17T18:45:10.5927330Z The FakeTensor implementation must consist of only PyTorch operations 2025-03-17T18:45:10.5927892Z (and may not directly access the storage or data of any input or 2025-03-17T18:45:10.5928322Z intermediate Tensors). 2025-03-17T18:45:10.5928554Z 2025-03-17T18:45:10.5928800Z This API may be used as a decorator (see examples). 
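Before the fuller examples below, a compact, hedged sketch of the decorator usage just mentioned; the op name ``mylib::double`` is made up purely so the snippet is self-contained:

    import torch
    from torch import Tensor

    @torch.library.custom_op("mylib::double", mutates_args=())
    def double(x: Tensor) -> Tensor:
        return x * 2

    @torch.library.register_fake("mylib::double")
    def _(x):
        # Fake impl: describe only output metadata, never touch storage or data.
        return torch.empty_like(x)

    with torch._subclasses.fake_tensor.FakeTensorMode():
        y = double(torch.randn(2, 3))   # dispatches to the fake impl above
        assert y.shape == (2, 3)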
2025-03-17T18:45:10.5929135Z 2025-03-17T18:45:10.5929335Z For a detailed guide on custom ops, please see 2025-03-17T18:45:10.5929979Z https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html 2025-03-17T18:45:10.5930678Z 2025-03-17T18:45:10.5930780Z Examples: 2025-03-17T18:45:10.5931033Z >>> import torch 2025-03-17T18:45:10.5931324Z >>> import numpy as np 2025-03-17T18:45:10.5931633Z >>> from torch import Tensor 2025-03-17T18:45:10.5931941Z >>> 2025-03-17T18:45:10.5932273Z >>> # Example 1: an operator without data-dependent output shape 2025-03-17T18:45:10.5932825Z >>> @torch.library.custom_op("mylib::custom_linear", mutates_args=()) 2025-03-17T18:45:10.5933398Z >>> def custom_linear(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor: 2025-03-17T18:45:10.5933936Z >>> raise NotImplementedError("Implementation goes here") 2025-03-17T18:45:10.5934337Z >>> 2025-03-17T18:45:10.5934647Z >>> @torch.library.register_fake("mylib::custom_linear") 2025-03-17T18:45:10.5935056Z >>> def _(x, weight, bias): 2025-03-17T18:45:10.5935379Z >>> assert x.dim() == 2 2025-03-17T18:45:10.5935705Z >>> assert weight.dim() == 2 2025-03-17T18:45:10.5936032Z >>> assert bias.dim() == 1 2025-03-17T18:45:10.5936384Z >>> assert x.shape[1] == weight.shape[1] 2025-03-17T18:45:10.5937059Z >>> assert weight.shape[0] == bias.shape[0] 2025-03-17T18:45:10.5937453Z >>> assert x.device == weight.device 2025-03-17T18:45:10.5937790Z >>> 2025-03-17T18:45:10.5938049Z >>> return (x @ weight.t()) + bias 2025-03-17T18:45:10.5938378Z >>> 2025-03-17T18:45:10.5938691Z >>> with torch._subclasses.fake_tensor.FakeTensorMode(): 2025-03-17T18:45:10.5939105Z >>> x = torch.randn(2, 3) 2025-03-17T18:45:10.5939432Z >>> w = torch.randn(3, 3) 2025-03-17T18:45:10.5939753Z >>> b = torch.randn(3) 2025-03-17T18:45:10.5940112Z >>> y = torch.ops.mylib.custom_linear(x, w, b) 2025-03-17T18:45:10.5940470Z >>> 2025-03-17T18:45:10.5940710Z >>> assert y.shape == (2, 3) 2025-03-17T18:45:10.5941020Z >>> 2025-03-17T18:45:10.5941339Z >>> # Example 2: an operator with data-dependent output shape 2025-03-17T18:45:10.5941883Z >>> @torch.library.custom_op("mylib::custom_nonzero", mutates_args=()) 2025-03-17T18:45:10.5942374Z >>> def custom_nonzero(x: Tensor) -> Tensor: 2025-03-17T18:45:10.5942733Z >>> x_np = x.numpy(force=True) 2025-03-17T18:45:10.5943100Z >>> res = np.stack(np.nonzero(x_np), axis=1) 2025-03-17T18:45:10.5943497Z >>> return torch.tensor(res, device=x.device) 2025-03-17T18:45:10.5943849Z >>> 2025-03-17T18:45:10.5944164Z >>> @torch.library.register_fake("mylib::custom_nonzero") 2025-03-17T18:45:10.5944559Z >>> def _(x): 2025-03-17T18:45:10.5944879Z >>> # Number of nonzero-elements is data-dependent. 2025-03-17T18:45:10.5945407Z >>> # Since we cannot peek at the data in an fake impl, 2025-03-17T18:45:10.5945859Z >>> # we use the ctx object to construct a new symint that 2025-03-17T18:45:10.5946290Z >>> # represents the data-dependent size. 
2025-03-17T18:45:10.5946740Z >>> ctx = torch.library.get_ctx() 2025-03-17T18:45:10.5947114Z >>> nnz = ctx.new_dynamic_size() 2025-03-17T18:45:10.5947464Z >>> shape = [nnz, x.dim()] 2025-03-17T18:45:10.5947844Z >>> result = x.new_empty(shape, dtype=torch.int64) 2025-03-17T18:45:10.5948230Z >>> return result 2025-03-17T18:45:10.5948518Z >>> 2025-03-17T18:45:10.5966900Z >>> from torch.fx.experimental.proxy_tensor import make_fx 2025-03-17T18:45:10.5967352Z >>> 2025-03-17T18:45:10.5967617Z >>> x = torch.tensor([0, 1, 2, 3, 4, 0]) 2025-03-17T18:45:10.5968122Z >>> trace = make_fx(torch.ops.mylib.custom_nonzero, tracing_mode="symbolic")(x) 2025-03-17T18:45:10.5968638Z >>> trace.print_readable() 2025-03-17T18:45:10.5968933Z >>> 2025-03-17T18:45:10.5969286Z >>> assert torch.allclose(trace(x), torch.ops.mylib.custom_nonzero(x)) 2025-03-17T18:45:10.5969642Z 2025-03-17T18:45:10.5969747Z 2025-03-17T18:45:10.5970416Z Original Error: IndentationError('expected an indented block after function definition on line 37', ('', 38, 1, '_._ = None\n', 38, 2)) 2025-03-17T18:45:10.5971070Z 2025-03-17T18:45:10.5971174Z _._ = None 2025-03-17T18:45:10.5971392Z ^ 2025-03-17T18:45:10.5972071Z msg = Cannot scrape callname=register_autograd in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=1041. 2025-03-17T18:45:10.5972975Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.5973523Z Register a backward formula for this custom op. 2025-03-17T18:45:10.5973783Z 2025-03-17T18:45:10.5974097Z In order for an operator to work with autograd, you need to register 2025-03-17T18:45:10.5974805Z a backward formula: 2025-03-17T18:45:10.5975450Z 1. You must tell us how to compute gradients during the backward pass 2025-03-17T18:45:10.5976172Z by providing us a "backward" function. 2025-03-17T18:45:10.5976888Z 2. If you need any values from the forward to compute gradients, you can 2025-03-17T18:45:10.5977912Z use `setup_context` to save values for backward. 2025-03-17T18:45:10.5978197Z 2025-03-17T18:45:10.5978436Z ``backward`` runs during the backward pass. It accepts ``(ctx, *grads)``: 2025-03-17T18:45:10.5979007Z - ``grads`` is one or more gradients. The number of gradients matches 2025-03-17T18:45:10.5979469Z the number of outputs of the operator. 2025-03-17T18:45:10.5979935Z The ``ctx`` object is `the same ctx object `_ used by 2025-03-17T18:45:10.5980534Z :class:`torch.autograd.Function`. The semantics of ``backward_fn`` are the 2025-03-17T18:45:10.5981074Z same as :meth:`torch.autograd.Function.backward`. 2025-03-17T18:45:10.5981368Z 2025-03-17T18:45:10.5981586Z ``setup_context(ctx, inputs, output)`` runs during the forward pass. 2025-03-17T18:45:10.5982166Z Please save quantities needed for backward onto the ``ctx`` object via 2025-03-17T18:45:10.5982770Z either :meth:`torch.autograd.function.FunctionCtx.save_for_backward` 2025-03-17T18:45:10.5983350Z or assigning them as attributes of ``ctx``. If your custom op has 2025-03-17T18:45:10.5983902Z kwarg-only arguments, we expect the signature of ``setup_context`` 2025-03-17T18:45:10.5984446Z to be ``setup_context(ctx, inputs, keyword_only_inputs, output)``. 2025-03-17T18:45:10.5984775Z 2025-03-17T18:45:10.5984998Z Both ``setup_context_fn`` and ``backward_fn`` must be traceable. That is, 2025-03-17T18:45:10.5985580Z they may not directly access :meth:`torch.Tensor.data_ptr` and they must 2025-03-17T18:45:10.5986179Z not depend on or mutate global state. 
If you need a non-traceable backward, 2025-03-17T18:45:10.5986936Z you can make it a separate custom_op that you call inside ``backward_fn``. 2025-03-17T18:45:10.5987300Z 2025-03-17T18:45:10.5987525Z If you need different autograd behavior on different devices, then we 2025-03-17T18:45:10.5988119Z recommend creating two different custom operators, one for each device 2025-03-17T18:45:10.5988728Z that needs different behavior, and switching between them at runtime. 2025-03-17T18:45:10.5989076Z 2025-03-17T18:45:10.5989183Z Examples: 2025-03-17T18:45:10.5989433Z >>> import torch 2025-03-17T18:45:10.5989720Z >>> import numpy as np 2025-03-17T18:45:10.5990033Z >>> from torch import Tensor 2025-03-17T18:45:10.5990344Z >>> 2025-03-17T18:45:10.5990704Z >>> @torch.library.custom_op("mylib::numpy_sin", mutates_args=()) 2025-03-17T18:45:10.5991168Z >>> def numpy_sin(x: Tensor) -> Tensor: 2025-03-17T18:45:10.5991529Z >>> x_np = x.cpu().numpy() 2025-03-17T18:45:10.5991845Z >>> y_np = np.sin(x_np) 2025-03-17T18:45:10.5992226Z >>> return torch.from_numpy(y_np).to(device=x.device) 2025-03-17T18:45:10.5992602Z >>> 2025-03-17T18:45:10.5992899Z >>> def setup_context(ctx, inputs, output) -> Tensor: 2025-03-17T18:45:10.5993279Z >>> x, = inputs 2025-03-17T18:45:10.5993573Z >>> ctx.save_for_backward(x) 2025-03-17T18:45:10.5993891Z >>> 2025-03-17T18:45:10.5994139Z >>> def backward(ctx, grad): 2025-03-17T18:45:10.5994467Z >>> x, = ctx.saved_tensors 2025-03-17T18:45:10.5994797Z >>> return grad * x.cos() 2025-03-17T18:45:10.5995106Z >>> 2025-03-17T18:45:10.5995367Z >>> torch.library.register_autograd( 2025-03-17T18:45:10.5995797Z ... "mylib::numpy_sin", backward, setup_context=setup_context 2025-03-17T18:45:10.5996200Z ... ) 2025-03-17T18:45:10.5996431Z >>> 2025-03-17T18:45:10.5996691Z >>> x = torch.randn(3, requires_grad=True) 2025-03-17T18:45:10.5997039Z >>> y = numpy_sin(x) 2025-03-17T18:45:10.5997406Z >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y)) 2025-03-17T18:45:10.5997840Z >>> assert torch.allclose(grad_x, x.cos()) 2025-03-17T18:45:10.5998185Z >>> 2025-03-17T18:45:10.5998447Z >>> # Example with a keyword-only arg 2025-03-17T18:45:10.5998969Z >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=()) 2025-03-17T18:45:10.5999468Z >>> def numpy_mul(x: Tensor, *, val: float) -> Tensor: 2025-03-17T18:45:10.5999862Z >>> x_np = x.cpu().numpy() 2025-03-17T18:45:10.6000192Z >>> y_np = x_np * val 2025-03-17T18:45:10.6000559Z >>> return torch.from_numpy(y_np).to(device=x.device) 2025-03-17T18:45:10.6000937Z >>> 2025-03-17T18:45:10.6001307Z >>> def setup_context(ctx, inputs, keyword_only_inputs, output) -> Tensor: 2025-03-17T18:45:10.6001805Z >>> ctx.val = keyword_only_inputs["val"] 2025-03-17T18:45:10.6002147Z >>> 2025-03-17T18:45:10.6002395Z >>> def backward(ctx, grad): 2025-03-17T18:45:10.6002721Z >>> return grad * ctx.val 2025-03-17T18:45:10.6003026Z >>> 2025-03-17T18:45:10.6003285Z >>> torch.library.register_autograd( 2025-03-17T18:45:10.6003716Z ... "mylib::numpy_mul", backward, setup_context=setup_context 2025-03-17T18:45:10.6004103Z ... 
) 2025-03-17T18:45:10.6004336Z >>> 2025-03-17T18:45:10.6004602Z >>> x = torch.randn(3, requires_grad=True) 2025-03-17T18:45:10.6004966Z >>> y = numpy_mul(x, val=3.14) 2025-03-17T18:45:10.6005366Z >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y)) 2025-03-17T18:45:10.6005850Z >>> assert torch.allclose(grad_x, torch.full_like(x, 3.14)) 2025-03-17T18:45:10.6006162Z 2025-03-17T18:45:10.6006252Z 2025-03-17T18:45:10.6006640Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.6007029Z 2025-03-17T18:45:10.6007537Z msg = Cannot scrape callname=opcheck in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=1455. 2025-03-17T18:45:10.6008461Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.6009078Z Given an operator and some sample arguments, tests if the operator is 2025-03-17T18:45:10.6009536Z registered correctly. 2025-03-17T18:45:10.6009729Z 2025-03-17T18:45:10.6009951Z That is, when you use the torch.library/TORCH_LIBRARY APIs to create a 2025-03-17T18:45:10.6010551Z custom op, you specified metadata (e.g. mutability info) about the custom op 2025-03-17T18:45:10.6011167Z and these APIs require that the functions you pass them satisfy certain 2025-03-17T18:45:10.6011761Z properties (e.g. no data pointer access in the fake/meta/abstract kernel) 2025-03-17T18:45:10.6012277Z ``opcheck`` tests these metadata and properties. 2025-03-17T18:45:10.6012556Z 2025-03-17T18:45:10.6012677Z Concretely, we test the following: 2025-03-17T18:45:10.6012920Z 2025-03-17T18:45:10.6013105Z - test_schema: If the schema matches the implementation of 2025-03-17T18:45:10.6013644Z the operator. For example: if the schema specifies a Tensor is mutated, 2025-03-17T18:45:10.6014219Z then we check the implementation mutates the Tensor. If the schema 2025-03-17T18:45:10.6014755Z specifies that we return a new Tensor, then we check that the 2025-03-17T18:45:10.6015305Z implementation returns a new Tensor (instead of an existing one or 2025-03-17T18:45:10.6015768Z a view of an existing one). 2025-03-17T18:45:10.6016187Z - test_autograd_registration: If the operator supports training 2025-03-17T18:45:10.6016719Z (autograd): we check that its autograd formula is registered via 2025-03-17T18:45:10.6017265Z torch.library.register_autograd or a manual registration to one 2025-03-17T18:45:10.6017818Z or more DispatchKey::Autograd keys. Any other DispatchKey-based 2025-03-17T18:45:10.6018296Z registrations may lead to undefined behavior. 2025-03-17T18:45:10.6018756Z - test_faketensor: If the operator has a FakeTensor kernel 2025-03-17T18:45:10.6019249Z (and if it is correct). The FakeTensor kernel is necessary ( 2025-03-17T18:45:10.6019781Z but not sufficient) for the operator to work with PyTorch compilation 2025-03-17T18:45:10.6020417Z APIs (torch.compile/export/FX). We check that a FakeTensor kernel 2025-03-17T18:45:10.6020967Z (also sometimes known as a meta kernel) was registered for the 2025-03-17T18:45:10.6021494Z operator and that it is correct. This test takes the result of 2025-03-17T18:45:10.6022021Z running the operator on real tensors and the result of running 2025-03-17T18:45:10.6022536Z the operator on FakeTensors and checks that they have the same 2025-03-17T18:45:10.6023017Z Tensor metadata (sizes/strides/dtype/device/etc). 
2025-03-17T18:45:10.6023498Z - test_aot_dispatch_dynamic: If the operator has correct behavior 2025-03-17T18:45:10.6024019Z with PyTorch compilation APIs (torch.compile/export/FX). 2025-03-17T18:45:10.6024543Z This checks that the outputs (and gradients, if applicable) are the 2025-03-17T18:45:10.6025044Z same under eager-mode PyTorch and torch.compile. 2025-03-17T18:45:10.6025520Z This test is a superset of ``test_faketensor`` and is an e2e test; 2025-03-17T18:45:10.6026019Z other things it tests are that the operator supports 2025-03-17T18:45:10.6026642Z functionalization and that the backward pass (if it exists) also 2025-03-17T18:45:10.6027143Z supports FakeTensor and functionalization. 2025-03-17T18:45:10.6027417Z 2025-03-17T18:45:10.6027630Z For best results, please call ``opcheck`` multiple times with a 2025-03-17T18:45:10.6028147Z representative set of inputs. If your operator supports 2025-03-17T18:45:10.6028702Z autograd, please use ``opcheck`` with inputs with ``requires_grad = True``; 2025-03-17T18:45:10.6029303Z if your operator supports multiple devices (e.g. CPU and CUDA), please 2025-03-17T18:45:10.6029890Z use ``opcheck`` with inputs on all supported devices. 2025-03-17T18:45:10.6030168Z 2025-03-17T18:45:10.6030274Z Args: 2025-03-17T18:45:10.6030594Z op: The operator. Must either be a function decorated with 2025-03-17T18:45:10.6031122Z :func:`torch.library.custom_op` or an OpOverload/OpOverloadPacket 2025-03-17T18:45:10.6031696Z found in torch.ops.* (e.g. torch.ops.aten.sin, torch.ops.mylib.foo) 2025-03-17T18:45:10.6032162Z args: The args to the operator 2025-03-17T18:45:10.6032512Z kwargs: The kwargs to the operator 2025-03-17T18:45:10.6032939Z test_utils: Tests that we should run. Default: all of them. 2025-03-17T18:45:10.6033394Z Example: ("test_schema", "test_faketensor") 2025-03-17T18:45:10.6033858Z raise_exception: If we should raise an exception on the first 2025-03-17T18:45:10.6034357Z error. If False, we will return a dict with information 2025-03-17T18:45:10.6034781Z on if each test passed or not. 2025-03-17T18:45:10.6035253Z rtol (Optional[float]): Relative tolerance for floating point comparisons. 2025-03-17T18:45:10.6035773Z If specified ``atol`` must also be specified. 2025-03-17T18:45:10.6036236Z If omitted, default values based on the ``dtype`` are selected 2025-03-17T18:45:10.6036888Z (see the table in :func:`torch.testing.assert_close`). 2025-03-17T18:45:10.6037482Z atol (Optional[float]): Absolute tolerance for floating point comparisons. 2025-03-17T18:45:10.6038004Z If specified ``rtol`` must also be specified. 2025-03-17T18:45:10.6038473Z If omitted, default values based on the ``dtype`` are selected 2025-03-17T18:45:10.6038981Z (see the table in :func:`torch.testing.assert_close`). 2025-03-17T18:45:10.6039273Z 2025-03-17T18:45:10.6039392Z .. warning:: 2025-03-17T18:45:10.6039539Z 2025-03-17T18:45:10.6039776Z opcheck and :func:`torch.autograd.gradcheck` test different things; 2025-03-17T18:45:10.6040349Z opcheck tests if your usage of torch.library APIs is correct while 2025-03-17T18:45:10.6040916Z :func:`torch.autograd.gradcheck` tests if your autograd formula is 2025-03-17T18:45:10.6041492Z mathematically correct. Use both to test custom ops that support 2025-03-17T18:45:10.6042066Z gradient computation. 
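The warning above recommends pairing ``opcheck`` with :func:`torch.autograd.gradcheck`; a minimal, hedged sketch of that pairing follows (the op ``mylib::scale`` is made up here only so the snippet is self-contained), and the entry's own ``opcheck`` example comes right after:

    import torch
    from torch import Tensor

    @torch.library.custom_op("mylib::scale", mutates_args=())
    def scale(x: Tensor, factor: float) -> Tensor:
        return x * factor

    @scale.register_fake
    def _(x, factor):
        return torch.empty_like(x)

    def setup_context(ctx, inputs, output):
        _, factor = inputs
        ctx.factor = factor

    def backward(ctx, grad):
        return grad * ctx.factor, None   # None for the non-Tensor input

    scale.register_autograd(backward, setup_context=setup_context)

    x = torch.randn(4, dtype=torch.double, requires_grad=True)
    torch.library.opcheck(scale, (x, 2.0))       # checks torch.library usage
    torch.autograd.gradcheck(scale, (x, 2.0))    # checks the gradient numerically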
2025-03-17T18:45:10.6042265Z 2025-03-17T18:45:10.6042373Z Example: 2025-03-17T18:45:10.6042506Z 2025-03-17T18:45:10.6042664Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:10.6043136Z >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=()) 2025-03-17T18:45:10.6043618Z >>> def numpy_mul(x: Tensor, y: float) -> Tensor: 2025-03-17T18:45:10.6044002Z >>> x_np = x.numpy(force=True) 2025-03-17T18:45:10.6044325Z >>> z_np = x_np * y 2025-03-17T18:45:10.6044662Z >>> return torch.from_numpy(z_np).to(x.device) 2025-03-17T18:45:10.6045020Z >>> 2025-03-17T18:45:10.6045267Z >>> @numpy_mul.register_fake 2025-03-17T18:45:10.6045591Z >>> def _(x, y): 2025-03-17T18:45:10.6045888Z >>> return torch.empty_like(x) 2025-03-17T18:45:10.6046222Z >>> 2025-03-17T18:45:10.6046488Z >>> def setup_context(ctx, inputs, output): 2025-03-17T18:45:10.6046843Z >>> y, = inputs 2025-03-17T18:45:10.6047129Z >>> ctx.y = y 2025-03-17T18:45:10.6047393Z >>> 2025-03-17T18:45:10.6047636Z >>> def backward(ctx, grad): 2025-03-17T18:45:10.6047965Z >>> return grad * ctx.y, None 2025-03-17T18:45:10.6048285Z >>> 2025-03-17T18:45:10.6048647Z >>> numpy_mul.register_autograd(backward, setup_context=setup_context) 2025-03-17T18:45:10.6049092Z >>> 2025-03-17T18:45:10.6049320Z >>> sample_inputs = [ 2025-03-17T18:45:10.6049628Z >>> (torch.randn(3), 3.14), 2025-03-17T18:45:10.6049990Z >>> (torch.randn(2, 3, device='cuda'), 2.718), 2025-03-17T18:45:10.6050507Z >>> (torch.randn(1, 10, requires_grad=True), 1.234), 2025-03-17T18:45:10.6050986Z >>> (torch.randn(64, 64, device='cuda', requires_grad=True), 90.18), 2025-03-17T18:45:10.6051405Z >>> ] 2025-03-17T18:45:10.6051642Z >>> 2025-03-17T18:45:10.6051896Z >>> for args in sample_inputs: 2025-03-17T18:45:10.6052274Z >>> torch.library.opcheck(numpy_mul, args) 2025-03-17T18:45:10.6052539Z 2025-03-17T18:45:10.6052626Z 2025-03-17T18:45:10.6053009Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.6053387Z 2025-03-17T18:45:10.6388823Z msg = Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py line=1283. 2025-03-17T18:45:10.6389882Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.6390616Z load(f, map_location=None, pickle_module=pickle, *, weights_only=True, mmap=None, **pickle_load_args) 2025-03-17T18:45:10.6391099Z 2025-03-17T18:45:10.6391282Z Loads an object saved with :func:`torch.save` from a file. 2025-03-17T18:45:10.6391589Z 2025-03-17T18:45:10.6391831Z :func:`torch.load` uses Python's unpickling facilities but treats storages, 2025-03-17T18:45:10.6392433Z which underlie tensors, specially. They are first deserialized on the 2025-03-17T18:45:10.6393017Z CPU and are then moved to the device they were saved from. If this fails 2025-03-17T18:45:10.6393599Z (e.g. because the run time system doesn't have certain devices), an exception 2025-03-17T18:45:10.6394205Z is raised. However, storages can be dynamically remapped to an alternative 2025-03-17T18:45:10.6394742Z set of devices using the :attr:`map_location` argument. 2025-03-17T18:45:10.6395061Z 2025-03-17T18:45:10.6395309Z If :attr:`map_location` is a callable, it will be called once for each serialized 2025-03-17T18:45:10.6395914Z storage with two arguments: storage and location. The storage argument 2025-03-17T18:45:10.6396510Z will be the initial deserialization of the storage, residing on the CPU. 
2025-03-17T18:45:10.6397093Z Each serialized storage has a location tag associated with it which 2025-03-17T18:45:10.6397653Z identifies the device it was saved from, and this tag is the second 2025-03-17T18:45:10.6398387Z argument passed to :attr:`map_location`. The builtin location tags are ``'cpu'`` 2025-03-17T18:45:10.6399010Z for CPU tensors and ``'cuda:device_id'`` (e.g. ``'cuda:2'``) for CUDA tensors. 2025-03-17T18:45:10.6399574Z :attr:`map_location` should return either ``None`` or a storage. If 2025-03-17T18:45:10.6400169Z :attr:`map_location` returns a storage, it will be used as the final deserialized 2025-03-17T18:45:10.6400810Z object, already moved to the right device. Otherwise, :func:`torch.load` will 2025-03-17T18:45:10.6401438Z fall back to the default behavior, as if :attr:`map_location` wasn't specified. 2025-03-17T18:45:10.6401814Z 2025-03-17T18:45:10.6402067Z If :attr:`map_location` is a :class:`torch.device` object or a string containing 2025-03-17T18:45:10.6402678Z a device tag, it indicates the location where all tensors should be loaded. 2025-03-17T18:45:10.6403036Z 2025-03-17T18:45:10.6403314Z Otherwise, if :attr:`map_location` is a dict, it will be used to remap location tags 2025-03-17T18:45:10.6403923Z appearing in the file (keys), to ones that specify where to put the 2025-03-17T18:45:10.6404363Z storages (values). 2025-03-17T18:45:10.6404527Z 2025-03-17T18:45:10.6404764Z User extensions can register their own location tags and tagging and 2025-03-17T18:45:10.6405384Z deserialization methods using :func:`torch.serialization.register_package`. 2025-03-17T18:45:10.6405771Z 2025-03-17T18:45:10.6405873Z Args: 2025-03-17T18:45:10.6406333Z f: a file-like object (has to implement :meth:`read`, :meth:`readline`, :meth:`tell`, and :meth:`seek`), 2025-03-17T18:45:10.6407038Z or a string or os.PathLike object containing a file name 2025-03-17T18:45:10.6407676Z map_location: a function, :class:`torch.device`, string or a dict specifying how to remap storage 2025-03-17T18:45:10.6408238Z locations 2025-03-17T18:45:10.6408644Z pickle_module: module used for unpickling metadata and objects (has to 2025-03-17T18:45:10.6409191Z match the :attr:`pickle_module` used to serialize file) 2025-03-17T18:45:10.6409713Z weights_only: Indicates whether unpickler should be restricted to 2025-03-17T18:45:10.6410217Z loading only tensors, primitive types, dictionaries 2025-03-17T18:45:10.6410731Z and any types added via :func:`torch.serialization.add_safe_globals`. 2025-03-17T18:45:10.6411223Z See :ref:`weights-only` for more details. 2025-03-17T18:45:10.6411826Z mmap: Indicates whether the file should be mmaped rather than loading all the storages into memory. 2025-03-17T18:45:10.6412640Z Typically, tensor storages in the file will first be moved from disk to CPU memory, after which they 2025-03-17T18:45:10.6413463Z are moved to the location that they were tagged with when saving, or specified by ``map_location``. This 2025-03-17T18:45:10.6414275Z second step is a no-op if the final location is CPU. When the ``mmap`` flag is set, instead of copying the 2025-03-17T18:45:10.6414986Z tensor storages from disk to CPU memory in the first step, ``f`` is mmaped. 2025-03-17T18:45:10.6415607Z pickle_load_args: (Python 3 only) optional keyword arguments passed over to 2025-03-17T18:45:10.6416213Z :func:`pickle_module.load` and :func:`pickle_module.Unpickler`, e.g., 2025-03-17T18:45:10.6416672Z :attr:`errors=...`. 2025-03-17T18:45:10.6416874Z 2025-03-17T18:45:10.6416985Z .. 
warning:: 2025-03-17T18:45:10.6417354Z :func:`torch.load()` unless `weights_only` parameter is set to `True`, 2025-03-17T18:45:10.6417907Z uses ``pickle`` module implicitly, which is known to be insecure. 2025-03-17T18:45:10.6418522Z It is possible to construct malicious pickle data which will execute arbitrary code 2025-03-17T18:45:10.6419170Z during unpickling. Never load data that could have come from an untrusted 2025-03-17T18:45:10.6419895Z source in an unsafe mode, or that could have been tampered with. **Only load data you trust**. 2025-03-17T18:45:10.6420326Z 2025-03-17T18:45:10.6420418Z .. note:: 2025-03-17T18:45:10.6420827Z When you call :func:`torch.load()` on a file which contains GPU tensors, those tensors 2025-03-17T18:45:10.6421486Z will be loaded to GPU by default. You can call ``torch.load(.., map_location='cpu')`` 2025-03-17T18:45:10.6422149Z and then :meth:`load_state_dict` to avoid GPU RAM surge when loading a model checkpoint. 2025-03-17T18:45:10.6422556Z 2025-03-17T18:45:10.6422649Z .. note:: 2025-03-17T18:45:10.6423044Z By default, we decode byte strings as ``utf-8``. This is to avoid a common error 2025-03-17T18:45:10.6423655Z case ``UnicodeDecodeError: 'ascii' codec can't decode byte 0x...`` 2025-03-17T18:45:10.6424211Z when loading files saved by Python 2 in Python 3. If this default 2025-03-17T18:45:10.6424813Z is incorrect, you may use an extra :attr:`encoding` keyword argument to specify how 2025-03-17T18:45:10.6425453Z these objects should be loaded, e.g., :attr:`encoding='latin1'` decodes them 2025-03-17T18:45:10.6426069Z to strings using ``latin1`` encoding, and :attr:`encoding='bytes'` keeps them 2025-03-17T18:45:10.6426770Z as byte arrays which can be decoded later with ``byte_array.decode(...)``. 2025-03-17T18:45:10.6427127Z 2025-03-17T18:45:10.6427235Z Example: 2025-03-17T18:45:10.6427518Z >>> # xdoctest: +SKIP("undefined filepaths") 2025-03-17T18:45:10.6427919Z >>> torch.load("tensors.pt", weights_only=True) 2025-03-17T18:45:10.6428299Z # Load all tensors onto the CPU 2025-03-17T18:45:10.6428681Z >>> torch.load( 2025-03-17T18:45:10.6428943Z ... "tensors.pt", 2025-03-17T18:45:10.6429265Z ... map_location=torch.device("cpu"), 2025-03-17T18:45:10.6429625Z ... weights_only=True, 2025-03-17T18:45:10.6429925Z ... ) 2025-03-17T18:45:10.6430216Z # Load all tensors onto the CPU, using a function 2025-03-17T18:45:10.6430586Z >>> torch.load( 2025-03-17T18:45:10.6430859Z ... "tensors.pt", 2025-03-17T18:45:10.6431197Z ... map_location=lambda storage, loc: storage, 2025-03-17T18:45:10.6431572Z ... weights_only=True, 2025-03-17T18:45:10.6431871Z ... ) 2025-03-17T18:45:10.6432125Z # Load all tensors onto GPU 1 2025-03-17T18:45:10.6432441Z >>> torch.load( 2025-03-17T18:45:10.6432711Z ... "tensors.pt", 2025-03-17T18:45:10.6433068Z ... map_location=lambda storage, loc: storage.cuda(1), 2025-03-17T18:45:10.6433461Z ... weights_only=True, 2025-03-17T18:45:10.6433789Z ... ) # type: ignore[attr-defined] 2025-03-17T18:45:10.6434139Z # Map tensors from GPU 1 to GPU 0 2025-03-17T18:45:10.6434456Z >>> torch.load( 2025-03-17T18:45:10.6434729Z ... "tensors.pt", 2025-03-17T18:45:10.6435040Z ... map_location={"cuda:1": "cuda:0"}, 2025-03-17T18:45:10.6435387Z ... weights_only=True, 2025-03-17T18:45:10.6435685Z ... 
) 2025-03-17T18:45:10.6435940Z # Load tensor from io.BytesIO object 2025-03-17T18:45:10.6436437Z # Loading from a buffer setting weights_only=False, warning this can be unsafe 2025-03-17T18:45:10.6437128Z >>> with open("tensor.pt", "rb") as f: 2025-03-17T18:45:10.6437485Z ... buffer = io.BytesIO(f.read()) 2025-03-17T18:45:10.6437850Z >>> torch.load(buffer, weights_only=False) 2025-03-17T18:45:10.6438257Z # Load a module with 'ascii' encoding for unpickling 2025-03-17T18:45:10.6438780Z # Loading from a module setting weights_only=False, warning this can be unsafe 2025-03-17T18:45:10.6439366Z >>> torch.load("module.pt", encoding="ascii", weights_only=False) 2025-03-17T18:45:10.6439778Z 2025-03-17T18:45:10.6440159Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.6440536Z 2025-03-17T18:45:10.7450754Z msg = Cannot scrape callname=is_available in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py line=38. 2025-03-17T18:45:10.7451718Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:10.7452362Z Check if the current accelerator is available at runtime: it was build, all the 2025-03-17T18:45:10.7452973Z required drivers are available and at least one device is visible. 2025-03-17T18:45:10.7453474Z See :ref:`accelerator` for details. 2025-03-17T18:45:10.7453741Z 2025-03-17T18:45:10.7453848Z Returns: 2025-03-17T18:45:10.7454278Z bool: A boolean indicating if there is an available :ref:`accelerator`. 2025-03-17T18:45:10.7454693Z 2025-03-17T18:45:10.7454828Z Example:: 2025-03-17T18:45:10.7454966Z 2025-03-17T18:45:10.7455265Z >>> assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:10.7455761Z 2025-03-17T18:45:10.7456464Z Original Error: SyntaxError('invalid syntax', ('', 1, 41, 'assert torch.accelerator.is_available() "No available accelerators detected."\n', 1, 78)) 2025-03-17T18:45:10.7457149Z 2025-03-17T18:45:10.7457432Z assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:10.7457935Z ^ 2025-03-17T18:45:10.7467455Z msg = Cannot scrape callname=synchronize in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py line=153. 2025-03-17T18:45:10.7468409Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:10.7469172Z Wait for all kernels in all streams on the given device to complete. 2025-03-17T18:45:10.7469504Z 2025-03-17T18:45:10.7469610Z Args: 2025-03-17T18:45:10.7470074Z device (:class:`torch.device`, str, int, optional): device for which to synchronize. It must match 2025-03-17T18:45:10.7470770Z the current :ref:`accelerator` device type. If not given, 2025-03-17T18:45:10.7471349Z use :func:`torch.accelerator.current_device_index` by default. 2025-03-17T18:45:10.7471673Z 2025-03-17T18:45:10.7472066Z .. note:: This function is a no-op if the current :ref:`accelerator` is not initialized. 2025-03-17T18:45:10.7472534Z 2025-03-17T18:45:10.7472647Z Example:: 2025-03-17T18:45:10.7472801Z 2025-03-17T18:45:10.7473003Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:10.7473553Z >>> assert torch.accelerator.is_available() "No available accelerators detected." 
2025-03-17T18:45:10.7474167Z >>> start_event = torch.Event(enable_timing=True) 2025-03-17T18:45:10.7474644Z >>> end_event = torch.Event(enable_timing=True) 2025-03-17T18:45:10.7475025Z >>> start_event.record() 2025-03-17T18:45:10.7475542Z >>> tensor = torch.randn(100, device=torch.accelerator.current_accelerator()) 2025-03-17T18:45:10.7476087Z >>> sum = torch.sum(tensor) 2025-03-17T18:45:10.7476417Z >>> end_event.record() 2025-03-17T18:45:10.7476795Z >>> torch.accelerator.synchronize() 2025-03-17T18:45:10.7477207Z >>> elapsed_time_ms = start_event.elapsed_time(end_event) 2025-03-17T18:45:10.7477652Z 2025-03-17T18:45:10.7478401Z Original Error: SyntaxError('invalid syntax', ('', 2, 41, 'assert torch.accelerator.is_available() "No available accelerators detected."\n', 2, 78)) 2025-03-17T18:45:10.7479160Z 2025-03-17T18:45:10.7479429Z assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:10.7479989Z ^ 2025-03-17T18:45:10.7710569Z msg = Cannot scrape callname=cudart in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/__init__.py line=396. 2025-03-17T18:45:10.7711582Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:10.7712097Z Retrieves the CUDA runtime API module. 2025-03-17T18:45:10.7712325Z 2025-03-17T18:45:10.7712330Z 2025-03-17T18:45:10.7712763Z This function initializes the CUDA runtime environment if it is not already 2025-03-17T18:45:10.7713392Z initialized and returns the CUDA runtime API module (_cudart). The CUDA 2025-03-17T18:45:10.7713976Z runtime API module provides access to various CUDA runtime functions. 2025-03-17T18:45:10.7714338Z 2025-03-17T18:45:10.7714431Z Args: 2025-03-17T18:45:10.7714662Z ``None`` 2025-03-17T18:45:10.7714817Z 2025-03-17T18:45:10.7714908Z Returns: 2025-03-17T18:45:10.7715198Z module: The CUDA runtime API module (_cudart). 2025-03-17T18:45:10.7715471Z 2025-03-17T18:45:10.7715568Z Raises: 2025-03-17T18:45:10.7715943Z RuntimeError: If CUDA cannot be re-initialized in a forked subprocess. 2025-03-17T18:45:10.7716678Z AssertionError: If PyTorch is not compiled with CUDA support or if libcudart functions are unavailable. 
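Before the profiling example, a guarded minimal sketch of the call described above; it assumes a CUDA build and is a no-op otherwise:

    import torch
    from torch.cuda import cudart

    if torch.cuda.is_available():
        rt = cudart()                        # initializes the CUDA runtime if needed
        print(torch.cuda.is_initialized())   # True once the runtime module is returned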
2025-03-17T18:45:10.7717179Z 2025-03-17T18:45:10.7717319Z Example of CUDA operations with profiling: 2025-03-17T18:45:10.7717682Z >>> import torch 2025-03-17T18:45:10.7718003Z >>> from torch.cuda import cudart, check_error 2025-03-17T18:45:10.7718363Z >>> import os 2025-03-17T18:45:10.7718620Z >>> 2025-03-17T18:45:10.7718878Z >>> os.environ['CUDA_PROFILE'] = '1' 2025-03-17T18:45:10.7719207Z >>> 2025-03-17T18:45:10.7719485Z >>> def perform_cuda_operations_with_streams(): 2025-03-17T18:45:10.7719871Z >>> stream = torch.cuda.Stream() 2025-03-17T18:45:10.7720235Z >>> with torch.cuda.stream(stream): 2025-03-17T18:45:10.7720611Z >>> x = torch.randn(100, 100, device='cuda') 2025-03-17T18:45:10.7721091Z >>> y = torch.randn(100, 100, device='cuda') 2025-03-17T18:45:10.7721456Z >>> z = torch.mul(x, y) 2025-03-17T18:45:10.7721766Z >>> return z 2025-03-17T18:45:10.7722036Z >>> 2025-03-17T18:45:10.7722289Z >>> torch.cuda.synchronize() 2025-03-17T18:45:10.7722658Z >>> print("====== Start nsys profiling ======") 2025-03-17T18:45:10.7723059Z >>> check_error(cudart().cudaProfilerStart()) 2025-03-17T18:45:10.7723465Z >>> with torch.autograd.profiler.emit_nvtx(): 2025-03-17T18:45:10.7723895Z >>> result = perform_cuda_operations_with_streams() 2025-03-17T18:45:10.7724311Z >>> print("CUDA operations completed.") 2025-03-17T18:45:10.7724735Z >>> check_error(torch.cuda.cudart().cudaProfilerStop()) 2025-03-17T18:45:10.7725159Z >>> print("====== End nsys profiling ======") 2025-03-17T18:45:10.7725403Z 2025-03-17T18:45:10.7725619Z To run this example and save the profiling information, execute: 2025-03-17T18:45:10.7726328Z >>> $ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py 2025-03-17T18:45:10.7726823Z 2025-03-17T18:45:10.7727077Z This command profiles the CUDA operations in the provided script and saves 2025-03-17T18:45:10.7727660Z the profiling information to a file named `trace_name.prof`. 2025-03-17T18:45:10.7728240Z The `--profile-from-start off` option ensures that profiling starts only 2025-03-17T18:45:10.7728882Z after the `cudaProfilerStart` call in the script. 2025-03-17T18:45:10.7729388Z The `--csv` and `--print-summary` options format the profiling output as a 2025-03-17T18:45:10.7729883Z CSV file and print a summary, respectively. 2025-03-17T18:45:10.7730397Z The `-o` option specifies the output file name, and the `-f` option forces the 2025-03-17T18:45:10.7730942Z overwrite of the output file if it already exists. 2025-03-17T18:45:10.7731318Z 2025-03-17T18:45:10.7732109Z Original Error: SyntaxError('invalid syntax', ('', 1, 1, '$ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py\n', 1, 2)) 2025-03-17T18:45:10.7732886Z 2025-03-17T18:45:10.7733258Z $ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py 2025-03-17T18:45:10.7733909Z ^ 2025-03-17T18:45:10.7863523Z msg = Cannot scrape callname=Future.then in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py line=105. 2025-03-17T18:45:10.7864496Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.7864887Z 2025-03-17T18:45:10.7865137Z Append the given callback function to this ``Future``, which will be run 2025-03-17T18:45:10.7865712Z when the ``Future`` is completed. 
Multiple callbacks can be added to 2025-03-17T18:45:10.7866262Z the same ``Future``, but the order in which they will be executed cannot 2025-03-17T18:45:10.7866871Z be guaranteed (to enforce a certain order consider chaining: 2025-03-17T18:45:10.7867400Z ``fut.then(cb1).then(cb2)``). The callback must take one argument, which 2025-03-17T18:45:10.7867961Z is the reference to this ``Future``. The callback function can use the 2025-03-17T18:45:10.7868528Z :meth:`value` method to get the value. Note that if this ``Future`` is 2025-03-17T18:45:10.7869109Z already completed, the given callback will be run immediately inline. 2025-03-17T18:45:10.7869459Z 2025-03-17T18:45:10.7869673Z If the ``Future``'s value contains tensors that reside on GPUs, the 2025-03-17T18:45:10.7870229Z callback might be invoked while the async kernels that are populating 2025-03-17T18:45:10.7870803Z those tensors haven't yet finished executing on the device. However, the 2025-03-17T18:45:10.7871382Z callback will be invoked with some dedicated streams set as current 2025-03-17T18:45:10.7871938Z (fetched from a global pool) which will be synchronized with those 2025-03-17T18:45:10.7872653Z kernels. Hence any operation performed by the callback on these tensors 2025-03-17T18:45:10.7873294Z will be scheduled on the device after the kernels complete. In other 2025-03-17T18:45:10.7874013Z words, as long as the callback doesn't switch streams, it can safely 2025-03-17T18:45:10.7875031Z manipulate the result without any additional synchronization. This is 2025-03-17T18:45:10.7875620Z similar to the non-blocking behavior of :meth:`wait`. 2025-03-17T18:45:10.7875913Z 2025-03-17T18:45:10.7876135Z Similarly, if the callback returns a value that contains tensors that 2025-03-17T18:45:10.7876685Z reside on a GPU, it can do so even if the kernels that are producing 2025-03-17T18:45:10.7877240Z these tensors are still running on the device, as long as the callback 2025-03-17T18:45:10.7877808Z didn't change streams during its execution. If one wants to change 2025-03-17T18:45:10.7878366Z streams, one must be careful to re-synchronize them with the original 2025-03-17T18:45:10.7878944Z streams, that is, those that were current when the callback was invoked. 2025-03-17T18:45:10.7879301Z 2025-03-17T18:45:10.7879393Z Args: 2025-03-17T18:45:10.7879736Z callback(``Callable``): a ``Callable`` that takes this ``Future`` as 2025-03-17T18:45:10.7880188Z the only argument. 2025-03-17T18:45:10.7880413Z 2025-03-17T18:45:10.7880516Z Returns: 2025-03-17T18:45:10.7880822Z A new ``Future`` object that holds the return value of the 2025-03-17T18:45:10.7881298Z ``callback`` and will be marked as completed when the given 2025-03-17T18:45:10.7881710Z ``callback`` finishes. 2025-03-17T18:45:10.7881889Z 2025-03-17T18:45:10.7882093Z .. note:: Note that if the callback function throws, either 2025-03-17T18:45:10.7882608Z through the original future being completed with an exception and 2025-03-17T18:45:10.7883150Z calling ``fut.wait()``, or through other code in the callback, the 2025-03-17T18:45:10.7883689Z future returned by ``then`` will be marked appropriately with the 2025-03-17T18:45:10.7884236Z encountered error. However, if this callback later completes 2025-03-17T18:45:10.7884788Z additional futures, those futures are not marked as completed with 2025-03-17T18:45:10.7885347Z an error and the user is responsible for handling completion/waiting 2025-03-17T18:45:10.7885795Z on those futures independently. 
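A small sketch of the error propagation described in the note above, using only the public ``torch.futures`` API (the callback and values are made up); the entry's own example follows:

    import torch

    fut = torch.futures.Future()

    def bad_callback(f):
        raise ValueError("callback failed")   # raised while handling the result

    chained = fut.then(bad_callback)
    fut.set_result(5)

    try:
        chained.wait()                        # the error surfaces on the chained Future
    except Exception as err:
        print(f"chained future completed with error: {err}")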
2025-03-17T18:45:10.7886027Z 2025-03-17T18:45:10.7886278Z Example:: 2025-03-17T18:45:10.7886573Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_FUTURES) 2025-03-17T18:45:10.7886950Z >>> def callback(fut): 2025-03-17T18:45:10.7887285Z ... print(f"RPC return value is {fut.wait()}.") 2025-03-17T18:45:10.7887669Z >>> fut = torch.futures.Future() 2025-03-17T18:45:10.7888073Z >>> # The inserted callback will print the return value when 2025-03-17T18:45:10.7888504Z >>> # receiving the response from "worker1" 2025-03-17T18:45:10.7888864Z >>> cb_fut = fut.then(callback) 2025-03-17T18:45:10.7889188Z >>> chain_cb_fut = cb_fut.then( 2025-03-17T18:45:10.7889567Z ... lambda x : print(f"Chained cb done. {x.wait()}") 2025-03-17T18:45:10.7889938Z ... ) 2025-03-17T18:45:10.7890174Z >>> fut.set_result(5) 2025-03-17T18:45:10.7890465Z RPC return value is 5. 2025-03-17T18:45:10.7890758Z Chained cb done. None 2025-03-17T18:45:10.7890931Z 2025-03-17T18:45:10.7891210Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.7891591Z 2025-03-17T18:45:10.7892178Z msg = Cannot scrape callname=Future.set_result in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py line=213. 2025-03-17T18:45:10.7893114Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.7893503Z 2025-03-17T18:45:10.7893728Z Set the result for this ``Future``, which will mark this ``Future`` as 2025-03-17T18:45:10.7894293Z completed and trigger all attached callbacks. Note that a ``Future`` 2025-03-17T18:45:10.7894762Z cannot be marked completed twice. 2025-03-17T18:45:10.7895034Z 2025-03-17T18:45:10.7895277Z If the result contains tensors that reside on GPUs, this method can be 2025-03-17T18:45:10.7895840Z called even if the asynchronous kernels that are populating those 2025-03-17T18:45:10.7896399Z tensors haven't yet completed running on the device, provided that the 2025-03-17T18:45:10.7896991Z streams on which those kernels were enqueued are set as the current ones 2025-03-17T18:45:10.7897573Z when this method is called. Put simply, it's safe to call this method 2025-03-17T18:45:10.7898143Z immediately after launching those kernels, without any additional 2025-03-17T18:45:10.7898726Z synchronization, as long as one doesn't change streams in between. This 2025-03-17T18:45:10.7899311Z method will record events on all the relevant current streams and will 2025-03-17T18:45:10.7899859Z use them to ensure proper scheduling for all the consumers of this 2025-03-17T18:45:10.7900285Z ``Future``. 2025-03-17T18:45:10.7900427Z 2025-03-17T18:45:10.7900520Z Args: 2025-03-17T18:45:10.7900813Z result (object): the result object of this ``Future``. 2025-03-17T18:45:10.7901107Z 2025-03-17T18:45:10.7901203Z Example:: 2025-03-17T18:45:10.7901488Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_FUTURES) 2025-03-17T18:45:10.7901861Z >>> import threading 2025-03-17T18:45:10.7902137Z >>> import time 2025-03-17T18:45:10.7902416Z >>> def slow_set_future(fut, value): 2025-03-17T18:45:10.7902753Z ... time.sleep(0.5) 2025-03-17T18:45:10.7903050Z ... fut.set_result(value) 2025-03-17T18:45:10.7903374Z >>> fut = torch.futures.Future() 2025-03-17T18:45:10.7903706Z >>> t = threading.Thread( 2025-03-17T18:45:10.7904015Z ... target=slow_set_future, 2025-03-17T18:45:10.7904349Z ... args=(fut, torch.ones(2) * 3) 2025-03-17T18:45:10.7904669Z ... 
) 2025-03-17T18:45:10.7904892Z >>> t.start() 2025-03-17T18:45:10.7905135Z >>> print(fut.wait()) 2025-03-17T18:45:10.7905413Z tensor([3., 3.]) 2025-03-17T18:45:10.7905673Z >>> t.join() 2025-03-17T18:45:10.7905829Z 2025-03-17T18:45:10.7906088Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.7906574Z 2025-03-17T18:45:10.8008675Z msg = Cannot scrape callname=compile_shader in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/mps/__init__.py line=144. 2025-03-17T18:45:10.8009731Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.8010361Z Compiles compute shader from source and allows one to invoke kernels 2025-03-17T18:45:10.8010893Z defined there from the comfort of Python runtime 2025-03-17T18:45:10.8011279Z Example:: 2025-03-17T18:45:10.8011433Z 2025-03-17T18:45:10.8011580Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_MPS) 2025-03-17T18:45:10.8011969Z >>> lib = torch.mps.compile_shader( 2025-03-17T18:45:10.8012607Z ... "kernel void full(device float* out, constant float& val, uint idx [[thread_position_in_grid]]) { out[idx] = val; }" 2025-03-17T18:45:10.8013227Z ... ) 2025-03-17T18:45:10.8013500Z >>> x = torch.zeros(16, device="mps") 2025-03-17T18:45:10.8013854Z >>> lib.full(x, 3.14) 2025-03-17T18:45:10.8014142Z 2025-03-17T18:45:10.8014536Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.8014920Z 2025-03-17T18:45:10.8235220Z msg = Cannot scrape callname=sum in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py line=202. 2025-03-17T18:45:10.8236117Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:10.8236685Z Return the sum of each row of the given sparse tensor. 2025-03-17T18:45:10.8237090Z 2025-03-17T18:45:10.8237338Z Returns the sum of each row of the sparse tensor :attr:`input` in the given 2025-03-17T18:45:10.8237904Z dimensions :attr:`dim`. If :attr:`dim` is a list of dimensions, 2025-03-17T18:45:10.8238476Z reduce over all of them. When sum over all ``sparse_dim``, this method 2025-03-17T18:45:10.8239120Z returns a dense tensor instead of a sparse tensor. 2025-03-17T18:45:10.8239391Z 2025-03-17T18:45:10.8239668Z All summed :attr:`dim` are squeezed (see :func:`torch.squeeze`), resulting an output 2025-03-17T18:45:10.8240267Z tensor having :attr:`dim` fewer dimensions than :attr:`input`. 2025-03-17T18:45:10.8240582Z 2025-03-17T18:45:10.8240830Z During backward, only gradients at ``nnz`` locations of :attr:`input` 2025-03-17T18:45:10.8241431Z will propagate back. Note that the gradients of :attr:`input` is coalesced. 2025-03-17T18:45:10.8241799Z 2025-03-17T18:45:10.8241902Z Args: 2025-03-17T18:45:10.8242167Z input (Tensor): the input sparse tensor 2025-03-17T18:45:10.8242703Z dim (int or tuple of ints): a dimension or a list of dimensions to reduce. Default: reduce 2025-03-17T18:45:10.8243212Z over all dims. 2025-03-17T18:45:10.8243658Z dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor. 2025-03-17T18:45:10.8244176Z Default: dtype of :attr:`input`. 
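Before the entry's own larger example, a tiny hedged sketch of the behaviour described above: summing over all sparse dims yields a dense result, and backward only populates gradients at the ``nnz`` locations (the indices and values here are made up):

    import torch

    i = torch.tensor([[0, 1, 1],
                      [2, 0, 2]])
    v = torch.tensor([3.0, 4.0, 5.0], requires_grad=True)
    S = torch.sparse_coo_tensor(i, v, (2, 3))

    out = torch.sparse.sum(S, dim=[0, 1])   # all sparse dims reduced -> dense 0-dim tensor
    out.backward()
    print(out.item())                        # 12.0
    print(v.grad)                            # gradients only at the three nnz locations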
2025-03-17T18:45:10.8244409Z 2025-03-17T18:45:10.8244530Z Example:: 2025-03-17T18:45:10.8244668Z 2025-03-17T18:45:10.8244775Z >>> nnz = 3 2025-03-17T18:45:10.8245035Z >>> dims = [5, 5, 2, 3] 2025-03-17T18:45:10.8245386Z >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)), 2025-03-17T18:45:10.8245866Z torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz) 2025-03-17T18:45:10.8246317Z >>> V = torch.randn(nnz, dims[2], dims[3]) 2025-03-17T18:45:10.8246679Z >>> size = torch.Size(dims) 2025-03-17T18:45:10.8247050Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:10.8247450Z >>> S = torch.sparse_coo_tensor(I, V, size) 2025-03-17T18:45:10.8247792Z >>> S 2025-03-17T18:45:10.8248049Z tensor(indices=tensor([[2, 0, 3], 2025-03-17T18:45:10.8248383Z [2, 4, 1]]), 2025-03-17T18:45:10.8248744Z values=tensor([[[-0.6438, -1.6467, 1.4004], 2025-03-17T18:45:10.8249125Z [ 0.3411, 0.0918, -0.2312]], 2025-03-17T18:45:10.8249372Z 2025-03-17T18:45:10.8249488Z [[ 0.5348, 0.0634, -2.0494], 2025-03-17T18:45:10.8249844Z [-0.7125, -1.0646, 2.1844]], 2025-03-17T18:45:10.8250167Z 2025-03-17T18:45:10.8250303Z [[ 0.1276, 0.1874, -0.6334], 2025-03-17T18:45:10.8250663Z [-1.9682, -0.5340, 0.7483]]]), 2025-03-17T18:45:10.8251061Z size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo) 2025-03-17T18:45:10.8251329Z 2025-03-17T18:45:10.8251543Z # when sum over only part of sparse_dims, return a sparse tensor 2025-03-17T18:45:10.8251994Z >>> torch.sparse.sum(S, [1, 3]) 2025-03-17T18:45:10.8252349Z tensor(indices=tensor([[0, 2, 3]]), 2025-03-17T18:45:10.8252700Z values=tensor([[-1.4512, 0.4073], 2025-03-17T18:45:10.8253047Z [-0.8901, 0.2017], 2025-03-17T18:45:10.8253385Z [-0.3183, -1.7539]]), 2025-03-17T18:45:10.8253756Z size=(5, 2), nnz=3, layout=torch.sparse_coo) 2025-03-17T18:45:10.8254010Z 2025-03-17T18:45:10.8254182Z # when sum over all sparse dim, return a dense tensor 2025-03-17T18:45:10.8254578Z # with summed dims squeezed 2025-03-17T18:45:10.8254901Z >>> torch.sparse.sum(S, [0, 1, 3]) 2025-03-17T18:45:10.8255238Z tensor([-2.6596, -1.1450]) 2025-03-17T18:45:10.8255531Z 2025-03-17T18:45:10.8255918Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:10.8256309Z 2025-03-17T18:45:11.4008332Z msg = Cannot scrape callname=vmap in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/apis.py line=39. 2025-03-17T18:45:11.4009361Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:11.4010028Z 2025-03-17T18:45:11.4010272Z vmap is the vectorizing map; ``vmap(func)`` returns a new function that 2025-03-17T18:45:11.4010829Z maps ``func`` over some dimension of the inputs. Semantically, vmap 2025-03-17T18:45:11.4011394Z pushes the map into PyTorch operations called by ``func``, effectively 2025-03-17T18:45:11.4011867Z vectorizing those operations. 2025-03-17T18:45:11.4012081Z 2025-03-17T18:45:11.4012316Z vmap is useful for handling batch dimensions: one can write a function 2025-03-17T18:45:11.4012870Z ``func`` that runs on examples and then lift it to a function that can 2025-03-17T18:45:11.4013422Z take batches of examples with ``vmap(func)``. vmap can also be used to 2025-03-17T18:45:11.4013943Z compute batched gradients when composed with autograd. 2025-03-17T18:45:11.4014231Z 2025-03-17T18:45:11.4014370Z .. note:: 2025-03-17T18:45:11.4014686Z :func:`torch.vmap` is aliased to :func:`torch.func.vmap` for 2025-03-17T18:45:11.4015137Z convenience. Use whichever one you'd like. 
2025-03-17T18:45:11.4015402Z 2025-03-17T18:45:11.4015494Z Args: 2025-03-17T18:45:11.4015845Z func (function): A Python function that takes one or more arguments. 2025-03-17T18:45:11.4016407Z Must return one or more Tensors. 2025-03-17T18:45:11.4016851Z in_dims (int or nested structure): Specifies which dimension of the 2025-03-17T18:45:11.4017376Z inputs should be mapped over. ``in_dims`` should have a 2025-03-17T18:45:11.4017880Z structure like the inputs. If the ``in_dim`` for a particular 2025-03-17T18:45:11.4018394Z input is None, then that indicates there is no map dimension. 2025-03-17T18:45:11.4018814Z Default: 0. 2025-03-17T18:45:11.4019179Z out_dims (int or Tuple[int]): Specifies where the mapped dimension 2025-03-17T18:45:11.4019709Z should appear in the outputs. If ``out_dims`` is a Tuple, then 2025-03-17T18:45:11.4020198Z it should have one element per output. Default: 0. 2025-03-17T18:45:11.4020678Z randomness (str): Specifies whether the randomness in this 2025-03-17T18:45:11.4021214Z vmap should be the same or different across batches. If 'different', 2025-03-17T18:45:11.4021772Z the randomness for each batch will be different. If 'same', the 2025-03-17T18:45:11.4022330Z randomness will be the same across batches. If 'error', any calls to 2025-03-17T18:45:11.4023027Z random functions will error. Default: 'error'. WARNING: this flag 2025-03-17T18:45:11.4023595Z only applies to random PyTorch operations and does not apply to 2025-03-17T18:45:11.4024078Z Python's random module or numpy randomness. 2025-03-17T18:45:11.4024579Z chunk_size (None or int): If None (default), apply a single vmap over inputs. 2025-03-17T18:45:11.4025173Z If not None, then compute the vmap :attr:`chunk_size` samples at a time. 2025-03-17T18:45:11.4025798Z Note that :attr:`chunk_size=1` is equivalent to computing the vmap with a for-loop. 2025-03-17T18:45:11.4026535Z If you run into memory issues computing the vmap, please try a non-None chunk_size. 2025-03-17T18:45:11.4026935Z 2025-03-17T18:45:11.4027044Z Returns: 2025-03-17T18:45:11.4027377Z Returns a new "batched" function. It takes the same inputs as 2025-03-17T18:45:11.4027875Z ``func``, except each input has an extra dimension at the index 2025-03-17T18:45:11.4028398Z specified by ``in_dims``. It takes returns the same outputs as 2025-03-17T18:45:11.4028909Z ``func``, except each output has an extra dimension at the index 2025-03-17T18:45:11.4029341Z specified by ``out_dims``. 2025-03-17T18:45:11.4029547Z 2025-03-17T18:45:11.4029640Z .. warning: 2025-03-17T18:45:11.4029982Z :func:`vmap` works best with functional-style code. Please do not 2025-03-17T18:45:11.4030508Z perform any side-effects in ``func``, with the exception of 2025-03-17T18:45:11.4031068Z in-place PyTorch operations. Examples of side-effects include mutating 2025-03-17T18:45:11.4031747Z Python data structures and assigning values to variables not captured 2025-03-17T18:45:11.4032200Z in ``func``. 2025-03-17T18:45:11.4032358Z 2025-03-17T18:45:11.4032600Z One example of using :func:`vmap` is to compute batched dot products. PyTorch 2025-03-17T18:45:11.4033196Z doesn't provide a batched ``torch.dot`` API; instead of unsuccessfully 2025-03-17T18:45:11.4033779Z rummaging through docs, use :func:`vmap` to construct a new function. 
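The docstring's own doctest for this follows; as a self-contained plain-script version of the same idea (assuming a recent PyTorch where torch.vmap is available), lifting torch.dot over a leading batch dimension looks like this:

    import torch

    batched_dot = torch.vmap(torch.dot)         # [N, D], [N, D] -> [N]
    x, y = torch.randn(8, 5), torch.randn(8, 5)
    out = batched_dot(x, y)                     # shape [8]
    assert torch.allclose(out, (x * y).sum(dim=-1))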
2025-03-17T18:45:11.4034138Z 2025-03-17T18:45:11.4034304Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:11.4034783Z >>> batched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N] 2025-03-17T18:45:11.4035244Z >>> x, y = torch.randn(2, 5), torch.randn(2, 5) 2025-03-17T18:45:11.4035600Z >>> batched_dot(x, y) 2025-03-17T18:45:11.4035774Z 2025-03-17T18:45:11.4036023Z :func:`vmap` can be helpful in hiding batch dimensions, leading to a simpler 2025-03-17T18:45:11.4036499Z model authoring experience. 2025-03-17T18:45:11.4036693Z 2025-03-17T18:45:11.4037013Z >>> batch_size, feature_size = 3, 5 2025-03-17T18:45:11.4037428Z >>> weights = torch.randn(feature_size, requires_grad=True) 2025-03-17T18:45:11.4037823Z >>> 2025-03-17T18:45:11.4038061Z >>> def model(feature_vec): 2025-03-17T18:45:11.4038403Z >>> # Very simple linear model with activation 2025-03-17T18:45:11.4038803Z >>> return feature_vec.dot(weights).relu() 2025-03-17T18:45:11.4039137Z >>> 2025-03-17T18:45:11.4039426Z >>> examples = torch.randn(batch_size, feature_size) 2025-03-17T18:45:11.4039839Z >>> result = torch.vmap(model)(examples) 2025-03-17T18:45:11.4040088Z 2025-03-17T18:45:11.4040348Z :func:`vmap` can also help vectorize computations that were previously difficult 2025-03-17T18:45:11.4040977Z or impossible to batch. One example is higher-order gradient computation. 2025-03-17T18:45:11.4041582Z The PyTorch autograd engine computes vjps (vector-Jacobian products). 2025-03-17T18:45:11.4042179Z Computing a full Jacobian matrix for some function f: R^N -> R^N usually 2025-03-17T18:45:11.4042803Z requires N calls to ``autograd.grad``, one per Jacobian row. Using :func:`vmap`, 2025-03-17T18:45:11.4043427Z we can vectorize the whole computation, computing the Jacobian in a single 2025-03-17T18:45:11.4043908Z call to ``autograd.grad``. 
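The doctest below spells this out; here is a compact runnable sketch of the same vjp-based Jacobian computation, using a simple elementwise function so the expected result (a diagonal matrix) is easy to verify:

    import torch

    N = 4
    x = torch.randn(N, requires_grad=True)
    y = x ** 2                        # f: R^N -> R^N, Jacobian is diag(2 * x)
    I_N = torch.eye(N)

    def get_vjp(v):
        # one vector-Jacobian product per basis vector v
        return torch.autograd.grad(y, x, v, retain_graph=True)[0]

    jacobian = torch.vmap(get_vjp)(I_N)
    assert torch.allclose(jacobian, torch.diag(2 * x))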
2025-03-17T18:45:11.4044101Z 2025-03-17T18:45:11.4044199Z >>> # Setup 2025-03-17T18:45:11.4044578Z >>> N = 5 2025-03-17T18:45:11.4044832Z >>> f = lambda x: x ** 2 2025-03-17T18:45:11.4045159Z >>> x = torch.randn(N, requires_grad=True) 2025-03-17T18:45:11.4045503Z >>> y = f(x) 2025-03-17T18:45:11.4045756Z >>> I_N = torch.eye(N) 2025-03-17T18:45:11.4046035Z >>> 2025-03-17T18:45:11.4046283Z >>> # Sequential approach 2025-03-17T18:45:11.4046709Z >>> jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0] 2025-03-17T18:45:11.4047173Z >>> for v in I_N.unbind()] 2025-03-17T18:45:11.4047525Z >>> jacobian = torch.stack(jacobian_rows) 2025-03-17T18:45:11.4047866Z >>> 2025-03-17T18:45:11.4048114Z >>> # vectorized gradient computation 2025-03-17T18:45:11.4048450Z >>> def get_vjp(v): 2025-03-17T18:45:11.4048749Z >>> return torch.autograd.grad(y, x, v) 2025-03-17T18:45:11.4049119Z >>> jacobian = torch.vmap(get_vjp)(I_N) 2025-03-17T18:45:11.4049362Z 2025-03-17T18:45:11.4049636Z :func:`vmap` can also be nested, producing an output with multiple batched dimensions 2025-03-17T18:45:11.4050040Z 2025-03-17T18:45:11.4050187Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:11.4050733Z >>> batched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0] 2025-03-17T18:45:11.4051281Z >>> x, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5) 2025-03-17T18:45:11.4051674Z >>> batched_dot(x, y) # tensor of size [2, 3] 2025-03-17T18:45:11.4051928Z 2025-03-17T18:45:11.4052173Z If the inputs are not batched along the first dimension, ``in_dims`` specifies 2025-03-17T18:45:11.4052790Z the dimension that each inputs are batched along as 2025-03-17T18:45:11.4053075Z 2025-03-17T18:45:11.4053221Z >>> torch.dot # [N], [N] -> [] 2025-03-17T18:45:11.4053716Z >>> batched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D] 2025-03-17T18:45:11.4054200Z >>> x, y = torch.randn(2, 5), torch.randn(2, 5) 2025-03-17T18:45:11.4054707Z >>> batched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension 2025-03-17T18:45:11.4055072Z 2025-03-17T18:45:11.4055353Z If there are multiple inputs each of which is batched along different dimensions, 2025-03-17T18:45:11.4055971Z ``in_dims`` must be a tuple with the batch dimension for each input as 2025-03-17T18:45:11.4056296Z 2025-03-17T18:45:11.4056454Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:11.4056967Z >>> batched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N] 2025-03-17T18:45:11.4057465Z >>> x, y = torch.randn(2, 5), torch.randn(5) 2025-03-17T18:45:11.4057972Z >>> batched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None 2025-03-17T18:45:11.4058344Z 2025-03-17T18:45:11.4058604Z If the input is a Python struct, ``in_dims`` must be a tuple containing a struct 2025-03-17T18:45:11.4059100Z matching the shape of the input: 2025-03-17T18:45:11.4059308Z 2025-03-17T18:45:11.4059475Z >>> f = lambda dict: torch.dot(dict['x'], dict['y']) 2025-03-17T18:45:11.4059871Z >>> x, y = torch.randn(2, 5), torch.randn(5) 2025-03-17T18:45:11.4060214Z >>> input = {'x': x, 'y': y} 2025-03-17T18:45:11.4060604Z >>> batched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},)) 2025-03-17T18:45:11.4061028Z >>> batched_dot(input) 2025-03-17T18:45:11.4061224Z 2025-03-17T18:45:11.4061506Z By default, the output is batched along the first dimension. 
However, it can be batched 2025-03-17T18:45:11.4062112Z along any dimension by using ``out_dims`` 2025-03-17T18:45:11.4062367Z 2025-03-17T18:45:11.4062478Z >>> f = lambda x: x ** 2 2025-03-17T18:45:11.4062772Z >>> x = torch.randn(2, 5) 2025-03-17T18:45:11.4063103Z >>> batched_pow = torch.vmap(f, out_dims=1) 2025-03-17T18:45:11.4063462Z >>> batched_pow(x) # [5, 2] 2025-03-17T18:45:11.4063665Z 2025-03-17T18:45:11.4063961Z For any function that uses kwargs, the returned function will not batch the kwargs but will 2025-03-17T18:45:11.4064615Z accept kwargs 2025-03-17T18:45:11.4064773Z 2025-03-17T18:45:11.4064880Z >>> x = torch.randn([2, 5]) 2025-03-17T18:45:11.4065185Z >>> def fn(x, scale=4.): 2025-03-17T18:45:11.4065477Z >>> return x * scale 2025-03-17T18:45:11.4065750Z >>> 2025-03-17T18:45:11.4065992Z >>> batched_pow = torch.vmap(fn) 2025-03-17T18:45:11.4066359Z >>> assert torch.allclose(batched_pow(x), x * 4) 2025-03-17T18:45:11.4066958Z >>> batched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5] 2025-03-17T18:45:11.4067310Z 2025-03-17T18:45:11.4067528Z .. note:: 2025-03-17T18:45:11.4067902Z vmap does not provide general autobatching or handle variable-length 2025-03-17T18:45:11.4068372Z sequences out of the box. 2025-03-17T18:45:11.4068567Z 2025-03-17T18:45:11.4068840Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:11.4069219Z 2025-03-17T18:45:12.8571972Z msg = Cannot scrape callname=triton_op in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/triton.py line=21. 2025-03-17T18:45:12.8572915Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.8573581Z Create a custom operator whose implementation is backed by 1+ triton kernels. 2025-03-17T18:45:12.8573977Z 2025-03-17T18:45:12.8574197Z This is a more structured way of using triton kernels with PyTorch. 2025-03-17T18:45:12.8574806Z Prefer using triton kernels with no ``torch.library`` custom operator wrappers 2025-03-17T18:45:12.8575457Z (like :func:`torch.library.custom_op`, :func:`torch.library.triton_op`) because 2025-03-17T18:45:12.8576196Z that is simpler; 2025-03-17T18:45:12.8576632Z only use :func:`torch.library.custom_op`/:func:`torch.library.triton_op` if you 2025-03-17T18:45:12.8577399Z want to create an operator that behaves like PyTorch built-in operators. 2025-03-17T18:45:12.8578286Z For example, you may use a ``torch.library`` wrapper API to define the 2025-03-17T18:45:12.8579180Z behavior of the triton kernel when passed a tensor subclass or under 2025-03-17T18:45:12.8579860Z a TorchDispatchMode. 2025-03-17T18:45:12.8580053Z 2025-03-17T18:45:12.8580315Z Use :func:`torch.library.triton_op` instead of :func:`torch.library.custom_op` 2025-03-17T18:45:12.8580818Z when the implementation 2025-03-17T18:45:12.8581240Z consists of 1+ triton kernels. :func:`torch.library.custom_op` treats 2025-03-17T18:45:12.8581765Z custom operators as opaque (:func:`torch.compile` and 2025-03-17T18:45:12.8582299Z :func:`torch.export.export` will never trace into them), but ``triton_op`` 2025-03-17T18:45:12.8582903Z makes the implementation visible to these subsystems, allowing them 2025-03-17T18:45:12.8583378Z to optimize the triton kernel(s). 2025-03-17T18:45:12.8583597Z 2025-03-17T18:45:12.8583806Z Note that ``fn`` must only consist of calls to PyTorch-understood 2025-03-17T18:45:12.8584368Z operators and triton kernels. 
Any triton kernels called inside ``fn`` 2025-03-17T18:45:12.8584930Z must be wrapped in a call to :func:`torch.library.wrap_triton`. 2025-03-17T18:45:12.8585244Z 2025-03-17T18:45:12.8585348Z Args: 2025-03-17T18:45:12.8585736Z name (str): A name for the custom op that looks like "{namespace}::{name}", 2025-03-17T18:45:12.8586318Z e.g. "mylib::my_linear". The name is used as the op's stable identifier 2025-03-17T18:45:12.8586905Z in PyTorch subsystems (e.g. torch.export, FX graphs). 2025-03-17T18:45:12.8587451Z To avoid name collisions, please use your project name as the namespace; 2025-03-17T18:45:12.8588050Z e.g. all custom ops in pytorch/fbgemm use "fbgemm" as the namespace. 2025-03-17T18:45:12.8588682Z mutates_args (Iterable[str] or "unknown"): The names of args that the function mutates. 2025-03-17T18:45:12.8589337Z This MUST be accurate, otherwise, the behavior is undefined. If "unknown", 2025-03-17T18:45:12.8590144Z it pessimistically assumes that all inputs to the operator are being mutated. 2025-03-17T18:45:12.8590748Z schema (None | str): A schema string for the operator. If None 2025-03-17T18:45:12.8591267Z (recommended) we'll infer a schema for the operator from its type 2025-03-17T18:45:12.8591819Z annotations. We recommend letting us infer a schema unless you 2025-03-17T18:45:12.8592281Z have a specific reason not to. 2025-03-17T18:45:12.8592681Z Example: "(Tensor x, int y) -> (Tensor, Tensor)". 2025-03-17T18:45:12.8592963Z 2025-03-17T18:45:12.8593063Z Example:: 2025-03-17T18:45:12.8593216Z 2025-03-17T18:45:12.8593360Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:12.8593726Z >>> import torch 2025-03-17T18:45:12.8594065Z >>> from torch.library import triton_op, wrap_triton 2025-03-17T18:45:12.8594441Z >>> 2025-03-17T18:45:12.8594681Z >>> import triton 2025-03-17T18:45:12.8594992Z >>> from triton import language as tl 2025-03-17T18:45:12.8595330Z >>> 2025-03-17T18:45:12.8595562Z >>> @triton.jit 2025-03-17T18:45:12.8595834Z >>> def add_kernel( 2025-03-17T18:45:12.8596115Z >>> in_ptr0, 2025-03-17T18:45:12.8596384Z >>> in_ptr1, 2025-03-17T18:45:12.8596648Z >>> out_ptr, 2025-03-17T18:45:12.8596917Z >>> n_elements, 2025-03-17T18:45:12.8597204Z >>> BLOCK_SIZE: "tl.constexpr", 2025-03-17T18:45:12.8597535Z >>> ): 2025-03-17T18:45:12.8597798Z >>> pid = tl.program_id(axis=0) 2025-03-17T18:45:12.8598153Z >>> block_start = pid * BLOCK_SIZE 2025-03-17T18:45:12.8598653Z >>> offsets = block_start + tl.arange(0, BLOCK_SIZE) 2025-03-17T18:45:12.8599050Z >>> mask = offsets < n_elements 2025-03-17T18:45:12.8599421Z >>> x = tl.load(in_ptr0 + offsets, mask=mask) 2025-03-17T18:45:12.8599815Z >>> y = tl.load(in_ptr1 + offsets, mask=mask) 2025-03-17T18:45:12.8600182Z >>> output = x + y 2025-03-17T18:45:12.8600531Z >>> tl.store(out_ptr + offsets, output, mask=mask) 2025-03-17T18:45:12.8600894Z >>> 2025-03-17T18:45:12.8601164Z >>> @triton_op("mylib::add", mutates_args={}) 2025-03-17T18:45:12.8601612Z >>> def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: 2025-03-17T18:45:12.8602045Z >>> output = torch.empty_like(x) 2025-03-17T18:45:12.8602398Z >>> n_elements = output.numel() 2025-03-17T18:45:12.8602724Z >>> 2025-03-17T18:45:12.8602963Z >>> def grid(meta): 2025-03-17T18:45:12.8603342Z >>> return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) 2025-03-17T18:45:12.8603734Z >>> 2025-03-17T18:45:12.8604045Z >>> # NB: we need to wrap the triton kernel in a call to wrap_triton 2025-03-17T18:45:12.8604566Z >>> wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16) 
2025-03-17T18:45:12.8604996Z >>> return output 2025-03-17T18:45:12.8605288Z >>> 2025-03-17T18:45:12.8605524Z >>> @torch.compile 2025-03-17T18:45:12.8605805Z >>> def f(x, y): 2025-03-17T18:45:12.8606140Z >>> return add(x, y) 2025-03-17T18:45:12.8621740Z >>> 2025-03-17T18:45:12.8622044Z >>> x = torch.randn(3, device="cuda") 2025-03-17T18:45:12.8622433Z >>> y = torch.randn(3, device="cuda") 2025-03-17T18:45:12.8622767Z >>> 2025-03-17T18:45:12.8623000Z >>> z = f(x, y) 2025-03-17T18:45:12.8623296Z >>> assert torch.allclose(z, x + y) 2025-03-17T18:45:12.8623528Z 2025-03-17T18:45:12.8623652Z 2025-03-17T18:45:12.8624034Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.8624435Z 2025-03-17T18:45:12.8625105Z msg = Cannot scrape callname=wrap_triton in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/triton.py line=202. 2025-03-17T18:45:12.8626135Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.8626788Z Allows capture of a triton kernel into a graph via make_fx or 2025-03-17T18:45:12.8627229Z non-strict ``torch.export``. 2025-03-17T18:45:12.8627436Z 2025-03-17T18:45:12.8627640Z These technologies perform Dispatcher-based tracing (via 2025-03-17T18:45:12.8628165Z ``__torch_dispatch__``) and cannot see calls to raw triton kernels. 2025-03-17T18:45:12.8628708Z The ``wrap_triton`` API wraps a triton kernel into a callable that 2025-03-17T18:45:12.8629165Z can actually be traced into a graph. 2025-03-17T18:45:12.8629395Z 2025-03-17T18:45:12.8629636Z Please use this API together with :func:`torch.library.triton_op`. 2025-03-17T18:45:12.8629969Z 2025-03-17T18:45:12.8630079Z Examples: 2025-03-17T18:45:12.8630217Z 2025-03-17T18:45:12.8630323Z >>> # xdoctest: +SKIP 2025-03-17T18:45:12.8630618Z >>> import torch 2025-03-17T18:45:12.8630893Z >>> import triton 2025-03-17T18:45:12.8631200Z >>> from triton import language as tl 2025-03-17T18:45:12.8631630Z >>> from torch.fx.experimental.proxy_tensor import make_fx 2025-03-17T18:45:12.8632072Z >>> from torch.library import wrap_triton 2025-03-17T18:45:12.8632415Z >>> 2025-03-17T18:45:12.8632647Z >>> @triton.jit 2025-03-17T18:45:12.8632915Z >>> def add_kernel( 2025-03-17T18:45:12.8633195Z >>> in_ptr0, 2025-03-17T18:45:12.8633466Z >>> in_ptr1, 2025-03-17T18:45:12.8633732Z >>> out_ptr, 2025-03-17T18:45:12.8634003Z >>> n_elements, 2025-03-17T18:45:12.8634306Z >>> BLOCK_SIZE: "tl.constexpr", 2025-03-17T18:45:12.8634707Z >>> ): 2025-03-17T18:45:12.8634965Z >>> pid = tl.program_id(axis=0) 2025-03-17T18:45:12.8635314Z >>> block_start = pid * BLOCK_SIZE 2025-03-17T18:45:12.8635707Z >>> offsets = block_start + tl.arange(0, BLOCK_SIZE) 2025-03-17T18:45:12.8636102Z >>> mask = offsets < n_elements 2025-03-17T18:45:12.8636472Z >>> x = tl.load(in_ptr0 + offsets, mask=mask) 2025-03-17T18:45:12.8637057Z >>> y = tl.load(in_ptr1 + offsets, mask=mask) 2025-03-17T18:45:12.8637424Z >>> output = x + y 2025-03-17T18:45:12.8637771Z >>> tl.store(out_ptr + offsets, output, mask=mask) 2025-03-17T18:45:12.8638141Z >>> 2025-03-17T18:45:12.8638380Z >>> def add(x, y): 2025-03-17T18:45:12.8638681Z >>> output = torch.empty_like(x) 2025-03-17T18:45:12.8639040Z >>> n_elements = output.numel() 2025-03-17T18:45:12.8639370Z >>> 2025-03-17T18:45:12.8639608Z >>> def grid_fn(meta): 2025-03-17T18:45:12.8640000Z >>> return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) 2025-03-17T18:45:12.8640393Z >>> 2025-03-17T18:45:12.8640729Z >>> wrap_triton(add_kernel)[grid_fn](x, y, output, 
n_elements, 16) 2025-03-17T18:45:12.8641161Z >>> return output 2025-03-17T18:45:12.8641431Z >>> 2025-03-17T18:45:12.8641689Z >>> x = torch.randn(3, device="cuda") 2025-03-17T18:45:12.8642049Z >>> y = torch.randn(3, device="cuda") 2025-03-17T18:45:12.8642397Z >>> gm = make_fx(add)(x, y) 2025-03-17T18:45:12.8642717Z >>> print(gm.code) 2025-03-17T18:45:12.8643015Z >>> # def forward(self, x_1, y_1): 2025-03-17T18:45:12.8643489Z >>> # empty_like = torch.ops.aten.empty_like.default(x_1, pin_memory = False) 2025-03-17T18:45:12.8644108Z >>> # triton_kernel_wrapper_mutation_proxy = triton_kernel_wrapper_mutation( 2025-03-17T18:45:12.8644620Z >>> # kernel_idx = 0, constant_args_idx = 0, 2025-03-17T18:45:12.8645000Z >>> # grid = [(1, 1, 1)], kwargs = { 2025-03-17T18:45:12.8645397Z >>> # 'in_ptr0': x_1, 'in_ptr1': y_1, 'out_ptr': empty_like, 2025-03-17T18:45:12.8645814Z >>> # 'n_elements': 3, 'BLOCK_SIZE': 16 2025-03-17T18:45:12.8646159Z >>> # }) 2025-03-17T18:45:12.8646573Z >>> # return empty_like 2025-03-17T18:45:12.8646785Z 2025-03-17T18:45:12.8646889Z 2025-03-17T18:45:12.8647278Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.8647658Z 2025-03-17T18:45:12.9413214Z msg = Cannot scrape callname=assert_almost_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=326. 2025-03-17T18:45:12.9414251Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.9414758Z 2025-03-17T18:45:12.9415027Z Raises an AssertionError if two items are not equal up to desired 2025-03-17T18:45:12.9415738Z precision. 2025-03-17T18:45:12.9416009Z 2025-03-17T18:45:12.9416369Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:12.9416900Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:12.9417414Z instead of this function for more consistent floating point 2025-03-17T18:45:12.9417842Z comparisons. 2025-03-17T18:45:12.9418005Z 2025-03-17T18:45:12.9418237Z The test verifies that the elements of `actual` and `desired` satisfy. 2025-03-17T18:45:12.9418576Z 2025-03-17T18:45:12.9418754Z ``abs(desired-actual) < float64(1.5 * 10**(-decimal))`` 2025-03-17T18:45:12.9419032Z 2025-03-17T18:45:12.9419369Z That is a looser test than originally documented, but agrees with what the 2025-03-17T18:45:12.9420288Z actual implementation in `assert_array_almost_equal` did up to rounding 2025-03-17T18:45:12.9421363Z vagaries. An exception is raised at conflicting values. For ndarrays this 2025-03-17T18:45:12.9422019Z delegates to assert_array_almost_equal 2025-03-17T18:45:12.9422246Z 2025-03-17T18:45:12.9422355Z Parameters 2025-03-17T18:45:12.9422588Z ---------- 2025-03-17T18:45:12.9422838Z actual : array_like 2025-03-17T18:45:12.9423107Z The object to check. 2025-03-17T18:45:12.9423381Z desired : array_like 2025-03-17T18:45:12.9423675Z The expected object. 2025-03-17T18:45:12.9424103Z decimal : int, optional 2025-03-17T18:45:12.9424406Z Desired precision, default is 7. 2025-03-17T18:45:12.9424783Z err_msg : str, optional 2025-03-17T18:45:12.9425114Z The error message to be printed in case of failure. 2025-03-17T18:45:12.9425600Z verbose : bool, optional 2025-03-17T18:45:12.9426336Z If True, the conflicting values are appended to the error message. 
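The scalar rule quoted above can be illustrated directly; the helper below only restates that rule for plain floats and is not the library implementation (which also handles arrays, NaNs and error reporting):

    def scalar_almost_equal(actual, desired, decimal=7):
        # the documented check: abs(desired - actual) < 1.5 * 10**(-decimal)
        return abs(desired - actual) < 1.5 * 10 ** (-decimal)

    assert scalar_almost_equal(2.3333333333333, 2.33333334)                  # ok at 7 decimals
    assert not scalar_almost_equal(2.3333333333333, 2.33333334, decimal=10)  # too strict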
2025-03-17T18:45:12.9426767Z 2025-03-17T18:45:12.9426859Z Raises 2025-03-17T18:45:12.9427085Z ------ 2025-03-17T18:45:12.9427321Z AssertionError 2025-03-17T18:45:12.9427677Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:12.9427999Z 2025-03-17T18:45:12.9428104Z See Also 2025-03-17T18:45:12.9428326Z -------- 2025-03-17T18:45:12.9428703Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:12.9429205Z relative and/or absolute precision. 2025-03-17T18:45:12.9429672Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:12.9429999Z 2025-03-17T18:45:12.9430107Z Examples 2025-03-17T18:45:12.9430316Z -------- 2025-03-17T18:45:12.9430615Z >>> from torch._numpy.testing import assert_almost_equal 2025-03-17T18:45:12.9431040Z >>> assert_almost_equal(2.3333333333333, 2.33333334) 2025-03-17T18:45:12.9431476Z >>> assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) 2025-03-17T18:45:12.9431893Z Traceback (most recent call last): 2025-03-17T18:45:12.9432204Z ... 2025-03-17T18:45:12.9432436Z AssertionError: 2025-03-17T18:45:12.9432717Z Arrays are not almost equal to 10 decimals 2025-03-17T18:45:12.9433056Z ACTUAL: 2.3333333333333 2025-03-17T18:45:12.9433336Z DESIRED: 2.33333334 2025-03-17T18:45:12.9433491Z 2025-03-17T18:45:12.9433655Z >>> assert_almost_equal(np.array([1.0,2.3333333333333]), 2025-03-17T18:45:12.9434057Z ... np.array([1.0,2.33333334]), decimal=9) 2025-03-17T18:45:12.9434425Z Traceback (most recent call last): 2025-03-17T18:45:12.9434734Z ... 2025-03-17T18:45:12.9434958Z AssertionError: 2025-03-17T18:45:12.9435362Z Arrays are not almost equal to 9 decimals 2025-03-17T18:45:12.9435708Z 2025-03-17T18:45:12.9435948Z Mismatched elements: 1 / 2 (50%) 2025-03-17T18:45:12.9436290Z Max absolute difference: 6.666699636781459e-09 2025-03-17T18:45:12.9436677Z Max relative difference: 2.8571569790287484e-09 2025-03-17T18:45:12.9437302Z x: torch.ndarray([1.0000, 2.3333], dtype=float64) 2025-03-17T18:45:12.9437704Z y: torch.ndarray([1.0000, 2.3333], dtype=float64) 2025-03-17T18:45:12.9437974Z 2025-03-17T18:45:12.9437978Z 2025-03-17T18:45:12.9438238Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.9438637Z 2025-03-17T18:45:12.9439280Z msg = Cannot scrape callname=assert_approx_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=451. 2025-03-17T18:45:12.9440244Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.9440634Z 2025-03-17T18:45:12.9440879Z Raises an AssertionError if two items are not equal up to significant 2025-03-17T18:45:12.9441322Z digits. 2025-03-17T18:45:12.9441445Z 2025-03-17T18:45:12.9441644Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:12.9442125Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:12.9442633Z instead of this function for more consistent floating point 2025-03-17T18:45:12.9443049Z comparisons. 2025-03-17T18:45:12.9443209Z 2025-03-17T18:45:12.9443411Z Given two numbers, check that they are approximately equal. 2025-03-17T18:45:12.9444049Z Approximately equal is defined as the number of significant digits 2025-03-17T18:45:12.9444493Z that agree. 2025-03-17T18:45:12.9444625Z 2025-03-17T18:45:12.9444738Z Parameters 2025-03-17T18:45:12.9444958Z ---------- 2025-03-17T18:45:12.9445194Z actual : scalar 2025-03-17T18:45:12.9445449Z The object to check. 
2025-03-17T18:45:12.9445727Z desired : scalar 2025-03-17T18:45:12.9445988Z The expected object. 2025-03-17T18:45:12.9446277Z significant : int, optional 2025-03-17T18:45:12.9446587Z Desired precision, default is 7. 2025-03-17T18:45:12.9446913Z err_msg : str, optional 2025-03-17T18:45:12.9447239Z The error message to be printed in case of failure. 2025-03-17T18:45:12.9447620Z verbose : bool, optional 2025-03-17T18:45:12.9448011Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:12.9448337Z 2025-03-17T18:45:12.9448437Z Raises 2025-03-17T18:45:12.9448655Z ------ 2025-03-17T18:45:12.9448882Z AssertionError 2025-03-17T18:45:12.9449232Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:12.9449551Z 2025-03-17T18:45:12.9449655Z See Also 2025-03-17T18:45:12.9449879Z -------- 2025-03-17T18:45:12.9450252Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:12.9450756Z relative and/or absolute precision. 2025-03-17T18:45:12.9451213Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:12.9451558Z 2025-03-17T18:45:12.9451648Z Examples 2025-03-17T18:45:12.9451873Z -------- 2025-03-17T18:45:12.9452283Z >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) # doctest: +SKIP 2025-03-17T18:45:12.9452945Z >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, # doctest: +SKIP 2025-03-17T18:45:12.9453450Z ... significant=8) 2025-03-17T18:45:12.9453945Z >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, # doctest: +SKIP 2025-03-17T18:45:12.9454449Z ... significant=8) 2025-03-17T18:45:12.9454811Z Traceback (most recent call last): 2025-03-17T18:45:12.9455121Z ... 2025-03-17T18:45:12.9455349Z AssertionError: 2025-03-17T18:45:12.9455638Z Items are not equal to 8 significant digits: 2025-03-17T18:45:12.9455989Z ACTUAL: 1.234567e-21 2025-03-17T18:45:12.9456267Z DESIRED: 1.2345672e-21 2025-03-17T18:45:12.9456431Z 2025-03-17T18:45:12.9456697Z the evaluated condition that raises the exception is 2025-03-17T18:45:12.9456978Z 2025-03-17T18:45:12.9457171Z >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) 2025-03-17T18:45:12.9457556Z True 2025-03-17T18:45:12.9457673Z 2025-03-17T18:45:12.9457677Z 2025-03-17T18:45:12.9457950Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.9458327Z 2025-03-17T18:45:12.9458905Z msg = Cannot scrape callname=assert_array_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=730. 2025-03-17T18:45:12.9459863Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.9460247Z 2025-03-17T18:45:12.9460474Z Raises an AssertionError if two array_like objects are not equal. 2025-03-17T18:45:12.9460801Z 2025-03-17T18:45:12.9461023Z Given two array_like objects, check that the shape is equal and all 2025-03-17T18:45:12.9461586Z elements of these objects are equal (but see the Notes for the special 2025-03-17T18:45:12.9462143Z handling of a scalar). An exception is raised at shape mismatch or 2025-03-17T18:45:12.9462701Z conflicting values. In contrast to the standard usage in numpy, NaNs 2025-03-17T18:45:12.9463283Z are compared like numbers, no assertion is raised if both objects have 2025-03-17T18:45:12.9463744Z NaNs in the same positions. 
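As a quick plain-numpy illustration of the NaN behaviour just described (NaNs in matching positions compare as equal, while a NaN against a number is a mismatch); plain numpy is used here instead of the torch._numpy shim, but the documented behaviour is the same:

    import numpy as np
    from numpy.testing import assert_array_equal

    a = np.array([1.0, np.nan, 3.0])
    assert_array_equal(a, np.array([1.0, np.nan, 3.0]))   # passes: NaNs line up

    try:
        assert_array_equal(a, np.array([1.0, 2.0, 3.0]))
    except AssertionError:
        pass                                              # NaN vs. 2.0 is a mismatch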
2025-03-17T18:45:12.9463932Z 2025-03-17T18:45:12.9464177Z The usual caution for verifying equality with floating point numbers is 2025-03-17T18:45:12.9464630Z advised. 2025-03-17T18:45:12.9464756Z 2025-03-17T18:45:12.9464935Z Parameters 2025-03-17T18:45:12.9465171Z ---------- 2025-03-17T18:45:12.9465406Z x : array_like 2025-03-17T18:45:12.9465671Z The actual object to check. 2025-03-17T18:45:12.9465971Z y : array_like 2025-03-17T18:45:12.9466236Z The desired, expected object. 2025-03-17T18:45:12.9466663Z err_msg : str, optional 2025-03-17T18:45:12.9466999Z The error message to be printed in case of failure. 2025-03-17T18:45:12.9467393Z verbose : bool, optional 2025-03-17T18:45:12.9467787Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:12.9468226Z strict : bool, optional 2025-03-17T18:45:12.9468605Z If True, raise an AssertionError when either the shape or the data 2025-03-17T18:45:12.9469117Z type of the array_like objects does not match. The special 2025-03-17T18:45:12.9469626Z handling for scalars mentioned in the Notes section is disabled. 2025-03-17T18:45:12.9469950Z 2025-03-17T18:45:12.9470052Z Raises 2025-03-17T18:45:12.9470270Z ------ 2025-03-17T18:45:12.9470487Z AssertionError 2025-03-17T18:45:12.9470770Z If actual and desired objects are not equal. 2025-03-17T18:45:12.9471027Z 2025-03-17T18:45:12.9471118Z See Also 2025-03-17T18:45:12.9471335Z -------- 2025-03-17T18:45:12.9471709Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:12.9472210Z relative and/or absolute precision. 2025-03-17T18:45:12.9472683Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:12.9473027Z 2025-03-17T18:45:12.9473114Z Notes 2025-03-17T18:45:12.9473328Z ----- 2025-03-17T18:45:12.9473645Z When one of `x` and `y` is a scalar and the other is array_like, the 2025-03-17T18:45:12.9474188Z function checks that each element of the array_like object is equal to 2025-03-17T18:45:12.9474933Z the scalar. This behaviour can be disabled with the `strict` parameter. 2025-03-17T18:45:12.9475292Z 2025-03-17T18:45:12.9475414Z Examples 2025-03-17T18:45:12.9475735Z -------- 2025-03-17T18:45:12.9476215Z The first assert does not raise an exception: 2025-03-17T18:45:12.9476655Z 2025-03-17T18:45:12.9476828Z >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:12.9477237Z ... [np.exp(0),2.33333, np.nan]) 2025-03-17T18:45:12.9477477Z 2025-03-17T18:45:12.9477720Z Use `assert_allclose` or one of the nulp (number of floating point values) 2025-03-17T18:45:12.9478292Z functions for these cases instead: 2025-03-17T18:45:12.9478506Z 2025-03-17T18:45:12.9478665Z >>> np.testing.assert_allclose([1.0,np.pi,np.nan], 2025-03-17T18:45:12.9479066Z ... [1, np.sqrt(np.pi)**2, np.nan], 2025-03-17T18:45:12.9479440Z ... rtol=1e-10, atol=0) 2025-03-17T18:45:12.9479670Z 2025-03-17T18:45:12.9479893Z As mentioned in the Notes section, `assert_array_equal` has special 2025-03-17T18:45:12.9480456Z handling for scalars. 
Here the test checks that each value in `x` is 3: 2025-03-17T18:45:12.9480804Z 2025-03-17T18:45:12.9480930Z >>> x = np.full((2, 5), fill_value=3) 2025-03-17T18:45:12.9481266Z >>> np.testing.assert_array_equal(x, 3) 2025-03-17T18:45:12.9481505Z 2025-03-17T18:45:12.9481726Z Use `strict` to raise an AssertionError when comparing a scalar with an 2025-03-17T18:45:12.9482164Z array: 2025-03-17T18:45:12.9482300Z 2025-03-17T18:45:12.9482454Z >>> np.testing.assert_array_equal(x, 3, strict=True) 2025-03-17T18:45:12.9482851Z Traceback (most recent call last): 2025-03-17T18:45:12.9483163Z ... 2025-03-17T18:45:12.9483398Z AssertionError: 2025-03-17T18:45:12.9483652Z Arrays are not equal 2025-03-17T18:45:12.9483919Z 2025-03-17T18:45:12.9484162Z (shapes (2, 5), () mismatch) 2025-03-17T18:45:12.9484469Z x: torch.ndarray([[3, 3, 3, 3, 3], 2025-03-17T18:45:12.9484778Z [3, 3, 3, 3, 3]]) 2025-03-17T18:45:12.9485055Z y: torch.ndarray(3) 2025-03-17T18:45:12.9485213Z 2025-03-17T18:45:12.9485446Z The `strict` parameter also ensures that the array data types match: 2025-03-17T18:45:12.9485781Z 2025-03-17T18:45:12.9485963Z >>> x = np.array([2, 2, 2]) 2025-03-17T18:45:12.9486281Z >>> y = np.array([2., 2., 2.], dtype=np.float32) 2025-03-17T18:45:12.9486692Z >>> np.testing.assert_array_equal(x, y, strict=True) 2025-03-17T18:45:12.9487086Z Traceback (most recent call last): 2025-03-17T18:45:12.9487397Z ... 2025-03-17T18:45:12.9487617Z AssertionError: 2025-03-17T18:45:12.9487873Z Arrays are not equal 2025-03-17T18:45:12.9488136Z 2025-03-17T18:45:12.9488419Z (dtypes dtype("int64"), dtype("float32") mismatch) 2025-03-17T18:45:12.9488793Z x: torch.ndarray([2, 2, 2]) 2025-03-17T18:45:12.9489092Z y: torch.ndarray([2., 2., 2.]) 2025-03-17T18:45:12.9489300Z 2025-03-17T18:45:12.9489559Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.9489946Z 2025-03-17T18:45:12.9490543Z msg = Cannot scrape callname=assert_array_almost_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=836. 2025-03-17T18:45:12.9491530Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.9491935Z 2025-03-17T18:45:12.9492155Z Raises an AssertionError if two objects are not equal up to desired 2025-03-17T18:45:12.9492595Z precision. 2025-03-17T18:45:12.9492744Z 2025-03-17T18:45:12.9492928Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:12.9493416Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:12.9493925Z instead of this function for more consistent floating point 2025-03-17T18:45:12.9494343Z comparisons. 2025-03-17T18:45:12.9494505Z 2025-03-17T18:45:12.9494766Z The test verifies identical shapes and that the elements of ``actual`` and 2025-03-17T18:45:12.9495237Z ``desired`` satisfy. 2025-03-17T18:45:12.9495396Z 2025-03-17T18:45:12.9495544Z ``abs(desired-actual) < 1.5 * 10**(-decimal)`` 2025-03-17T18:45:12.9495792Z 2025-03-17T18:45:12.9496041Z That is a looser test than originally documented, but agrees with what the 2025-03-17T18:45:12.9496650Z actual implementation did up to rounding vagaries. An exception is raised 2025-03-17T18:45:12.9497268Z at shape mismatch or conflicting values. In contrast to the standard usage 2025-03-17T18:45:12.9497863Z in numpy, NaNs are compared like numbers, no assertion is raised if both 2025-03-17T18:45:12.9498346Z objects have NaNs in the same positions. 
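A small plain-numpy sketch of what "almost equal to N decimals" means for arrays under the rule above, using the same values as the doctest further down (a 6e-5 difference passes at 4 decimals but fails at 5):

    import numpy as np
    from numpy.testing import assert_array_almost_equal

    x = np.array([1.0, 2.33333, np.nan])
    y = np.array([1.0, 2.33339, np.nan])

    assert_array_almost_equal(x, y, decimal=4)      # |6e-5| < 1.5e-4 -> passes
    try:
        assert_array_almost_equal(x, y, decimal=5)  # |6e-5| >= 1.5e-5 -> raises
    except AssertionError:
        pass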
2025-03-17T18:45:12.9498578Z 2025-03-17T18:45:12.9498744Z Parameters 2025-03-17T18:45:12.9498981Z ---------- 2025-03-17T18:45:12.9499217Z x : array_like 2025-03-17T18:45:12.9499469Z The actual object to check. 2025-03-17T18:45:12.9499767Z y : array_like 2025-03-17T18:45:12.9500029Z The desired, expected object. 2025-03-17T18:45:12.9500348Z decimal : int, optional 2025-03-17T18:45:12.9500640Z Desired precision, default is 6. 2025-03-17T18:45:12.9500968Z err_msg : str, optional 2025-03-17T18:45:12.9501299Z The error message to be printed in case of failure. 2025-03-17T18:45:12.9501680Z verbose : bool, optional 2025-03-17T18:45:12.9502077Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:12.9502402Z 2025-03-17T18:45:12.9502502Z Raises 2025-03-17T18:45:12.9502720Z ------ 2025-03-17T18:45:12.9502947Z AssertionError 2025-03-17T18:45:12.9503297Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:12.9503611Z 2025-03-17T18:45:12.9503714Z See Also 2025-03-17T18:45:12.9503940Z -------- 2025-03-17T18:45:12.9504311Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:12.9504800Z relative and/or absolute precision. 2025-03-17T18:45:12.9505264Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:12.9505603Z 2025-03-17T18:45:12.9505693Z Examples 2025-03-17T18:45:12.9505913Z -------- 2025-03-17T18:45:12.9506177Z the first assert does not raise an exception 2025-03-17T18:45:12.9506431Z 2025-03-17T18:45:12.9506702Z >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], 2025-03-17T18:45:12.9507195Z ... [1.0,2.333,np.nan]) 2025-03-17T18:45:12.9507444Z 2025-03-17T18:45:12.9507626Z >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:12.9508061Z ... [1.0,2.33339,np.nan], decimal=5) 2025-03-17T18:45:12.9508439Z Traceback (most recent call last): 2025-03-17T18:45:12.9508749Z ... 2025-03-17T18:45:12.9508984Z AssertionError: 2025-03-17T18:45:12.9509266Z Arrays are not almost equal to 5 decimals 2025-03-17T18:45:12.9509611Z 2025-03-17T18:45:12.9509862Z Mismatched elements: 1 / 3 (33.3%) 2025-03-17T18:45:12.9510210Z Max absolute difference: 5.999999999994898e-05 2025-03-17T18:45:12.9510599Z Max relative difference: 2.5713661239633743e-05 2025-03-17T18:45:12.9511021Z x: torch.ndarray([1.0000, 2.3333, nan], dtype=float64) 2025-03-17T18:45:12.9511471Z y: torch.ndarray([1.0000, 2.3334, nan], dtype=float64) 2025-03-17T18:45:12.9511750Z 2025-03-17T18:45:12.9511947Z >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:12.9512376Z ... [1.0,2.33333, 5], decimal=5) 2025-03-17T18:45:12.9512751Z Traceback (most recent call last): 2025-03-17T18:45:12.9513047Z ... 2025-03-17T18:45:12.9513275Z AssertionError: 2025-03-17T18:45:12.9513552Z Arrays are not almost equal to 5 decimals 2025-03-17T18:45:12.9513889Z 2025-03-17T18:45:12.9514140Z x and y nan location mismatch: 2025-03-17T18:45:12.9514518Z x: torch.ndarray([1.0000, 2.3333, nan], dtype=float64) 2025-03-17T18:45:12.9514966Z y: torch.ndarray([1.0000, 2.3333, 5.0000], dtype=float64) 2025-03-17T18:45:12.9515253Z 2025-03-17T18:45:12.9515257Z 2025-03-17T18:45:12.9515520Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.9515912Z 2025-03-17T18:45:12.9516534Z msg = Cannot scrape callname=clear_and_catch_warnings in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=1786. 
2025-03-17T18:45:12.9517526Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:12.9518147Z Context manager that resets warning registry for catching warnings 2025-03-17T18:45:12.9518495Z 2025-03-17T18:45:12.9518747Z Warnings can be slippery, because, whenever a warning is triggered, Python 2025-03-17T18:45:12.9519426Z adds a ``__warningregistry__`` member to the *calling* module. This makes 2025-03-17T18:45:12.9520034Z it impossible to retrigger the warning in this module, whatever you put in 2025-03-17T18:45:12.9520657Z the warnings filters. This context manager accepts a sequence of `modules` 2025-03-17T18:45:12.9521181Z as a keyword argument to its constructor and: 2025-03-17T18:45:12.9521450Z 2025-03-17T18:45:12.9521685Z * stores and removes any ``__warningregistry__`` entries in given `modules` 2025-03-17T18:45:12.9522144Z on entry; 2025-03-17T18:45:12.9522491Z * resets ``__warningregistry__`` to its previous state on exit. 2025-03-17T18:45:12.9522814Z 2025-03-17T18:45:12.9523043Z This makes it possible to trigger any warning afresh inside the context 2025-03-17T18:45:12.9523588Z manager without disturbing the state of warnings outside. 2025-03-17T18:45:12.9523903Z 2025-03-17T18:45:12.9524143Z For compatibility with Python 3.0, please consider all arguments to be 2025-03-17T18:45:12.9524618Z keyword-only. 2025-03-17T18:45:12.9524779Z 2025-03-17T18:45:12.9524877Z Parameters 2025-03-17T18:45:12.9525119Z ---------- 2025-03-17T18:45:12.9525371Z record : bool, optional 2025-03-17T18:45:12.9525758Z Specifies whether warnings should be captured by a custom 2025-03-17T18:45:12.9526316Z implementation of ``warnings.showwarning()`` and be appended to a list 2025-03-17T18:45:12.9526895Z returned by the context manager. Otherwise None is returned by the 2025-03-17T18:45:12.9527462Z context manager. The objects appended to the list are arguments whose 2025-03-17T18:45:12.9528052Z attributes mirror the arguments to ``showwarning()``. 2025-03-17T18:45:12.9528462Z modules : sequence, optional 2025-03-17T18:45:12.9528894Z Sequence of modules for which to reset warnings registry on entry and 2025-03-17T18:45:12.9529432Z restore on exit. To work correctly, all 'ignore' filters should 2025-03-17T18:45:12.9529874Z filter by one of these modules. 2025-03-17T18:45:12.9530108Z 2025-03-17T18:45:12.9530200Z Examples 2025-03-17T18:45:12.9530439Z -------- 2025-03-17T18:45:12.9530686Z >>> import warnings 2025-03-17T18:45:12.9531052Z >>> with np.testing.clear_and_catch_warnings( # doctest: +SKIP 2025-03-17T18:45:12.9531492Z ... modules=[np.core.fromnumeric]): 2025-03-17T18:45:12.9531868Z ... warnings.simplefilter('always') 2025-03-17T18:45:12.9532335Z ... warnings.filterwarnings('ignore', module='np.core.fromnumeric') 2025-03-17T18:45:12.9532863Z ... # do something that raises a warning but ignore those in 2025-03-17T18:45:12.9533278Z ... # np.core.fromnumeric 2025-03-17T18:45:12.9533573Z 2025-03-17T18:45:12.9533953Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:12.9534327Z 2025-03-17T18:45:13.2889982Z msg = Cannot scrape callname=Conv1d in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py line=354. 2025-03-17T18:45:13.2890975Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.2891602Z Applies a 1D convolution over a quantized input signal composed of 2025-03-17T18:45:13.2892068Z several quantized input planes. 
2025-03-17T18:45:13.2892289Z 2025-03-17T18:45:13.2892521Z For details on input arguments, parameters, and implementation see 2025-03-17T18:45:13.2892977Z :class:`~torch.nn.Conv1d`. 2025-03-17T18:45:13.2893173Z 2025-03-17T18:45:13.2893293Z .. note:: 2025-03-17T18:45:13.2893642Z Only `zeros` is supported for the :attr:`padding_mode` argument. 2025-03-17T18:45:13.2893974Z 2025-03-17T18:45:13.2894080Z .. note:: 2025-03-17T18:45:13.2894407Z Only `torch.quint8` is supported for the input data type. 2025-03-17T18:45:13.2894707Z 2025-03-17T18:45:13.2894711Z 2025-03-17T18:45:13.2894822Z Attributes: 2025-03-17T18:45:13.2895190Z weight (Tensor): packed tensor derived from the learnable weight 2025-03-17T18:45:13.2895879Z parameter. 2025-03-17T18:45:13.2896241Z scale (Tensor): scalar for the output scale 2025-03-17T18:45:13.2896679Z zero_point (Tensor): scalar for the output zero point 2025-03-17T18:45:13.2896975Z 2025-03-17T18:45:13.2897134Z See :class:`~torch.nn.Conv1d` for other attributes. 2025-03-17T18:45:13.2897414Z 2025-03-17T18:45:13.2897512Z Examples:: 2025-03-17T18:45:13.2897668Z 2025-03-17T18:45:13.2897825Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_QENGINE) 2025-03-17T18:45:13.2898239Z >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2) 2025-03-17T18:45:13.2898620Z >>> input = torch.randn(20, 16, 100) 2025-03-17T18:45:13.2898972Z >>> # quantize input to quint8 2025-03-17T18:45:13.2899302Z >>> # xdoctest: +SKIP 2025-03-17T18:45:13.2899717Z >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, 2025-03-17T18:45:13.2900199Z ... dtype=torch.quint8) 2025-03-17T18:45:13.2900566Z >>> output = m(q_input) 2025-03-17T18:45:13.2900771Z 2025-03-17T18:45:13.2900871Z 2025-03-17T18:45:13.2901259Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.2901656Z 2025-03-17T18:45:13.3116259Z msg = Cannot scrape callname=LSTM in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/rnn.py line=11. 2025-03-17T18:45:13.3117217Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.3117754Z A quantized long short-term memory (LSTM). 2025-03-17T18:45:13.3118198Z 2025-03-17T18:45:13.3118501Z For the description and the argument types, please, refer to :class:`~torch.nn.LSTM` 2025-03-17T18:45:13.3118922Z 2025-03-17T18:45:13.3119020Z Attributes: 2025-03-17T18:45:13.3119303Z layers : instances of the `_LSTMLayer` 2025-03-17T18:45:13.3119556Z 2025-03-17T18:45:13.3119658Z .. note:: 2025-03-17T18:45:13.3120036Z To access the weights and biases, you need to access them per layer. 2025-03-17T18:45:13.3120563Z See examples in :class:`~torch.ao.nn.quantizable.LSTM` 2025-03-17T18:45:13.3120858Z 2025-03-17T18:45:13.3120970Z Examples:: 2025-03-17T18:45:13.3121228Z >>> # xdoctest: +SKIP 2025-03-17T18:45:13.3121540Z >>> custom_module_config = { 2025-03-17T18:45:13.3121908Z ... 'float_to_observed_custom_module_class': { 2025-03-17T18:45:13.3122307Z ... nn.LSTM: nn.quantizable.LSTM, 2025-03-17T18:45:13.3122651Z ... }, 2025-03-17T18:45:13.3122961Z ... 'observed_to_quantized_custom_module_class': { 2025-03-17T18:45:13.3123391Z ... nn.quantizable.LSTM: nn.quantized.LSTM, 2025-03-17T18:45:13.3123743Z ... } 2025-03-17T18:45:13.3123983Z ... 
} 2025-03-17T18:45:13.3124352Z >>> tq.prepare(model, prepare_custom_module_class=custom_module_config) 2025-03-17T18:45:13.3124927Z >>> tq.convert(model, convert_custom_module_class=custom_module_config) 2025-03-17T18:45:13.3125394Z 2025-03-17T18:45:13.3125780Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.3126171Z 2025-03-17T18:45:13.4201513Z msg = Cannot scrape callname=BaseSparsifier.squash_mask in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py line=227. 2025-03-17T18:45:13.4202637Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.4203218Z Squashes the sparse masks into the appropriate tensors. 2025-03-17T18:45:13.4203546Z 2025-03-17T18:45:13.4203760Z If either the `params_to_keep` or `params_to_keep_per_layer` is set, 2025-03-17T18:45:13.4204290Z the module will have a `sparse_params` dict attached to it. 2025-03-17T18:45:13.4204607Z 2025-03-17T18:45:13.4204699Z Args: 2025-03-17T18:45:13.4205228Z params_to_keep: List of keys to save in the module or a dict 2025-03-17T18:45:13.4205718Z representing the modules and keys that will have 2025-03-17T18:45:13.4206140Z sparsity parameters saved 2025-03-17T18:45:13.4206610Z params_to_keep_per_layer: Dict to specify the params that should be 2025-03-17T18:45:13.4207117Z saved for specific layers. The keys in the dict 2025-03-17T18:45:13.4207569Z should be the module fqn, while the values should 2025-03-17T18:45:13.4208024Z be a list of strings with the names of the variables 2025-03-17T18:45:13.4208452Z to save in the `sparse_params` 2025-03-17T18:45:13.4208712Z 2025-03-17T18:45:13.4208809Z Examples: 2025-03-17T18:45:13.4209102Z >>> # xdoctest: +SKIP("locals are undefined") 2025-03-17T18:45:13.4209491Z >>> # Don't save any sparse params 2025-03-17T18:45:13.4209880Z >>> sparsifier.squash_mask() 2025-03-17T18:45:13.4210264Z >>> hasattr(model.submodule1, 'sparse_params') 2025-03-17T18:45:13.4210623Z False 2025-03-17T18:45:13.4210770Z 2025-03-17T18:45:13.4210913Z >>> # Keep sparse params per layer 2025-03-17T18:45:13.4211279Z >>> sparsifier.squash_mask( 2025-03-17T18:45:13.4211626Z ... params_to_keep_per_layer={ 2025-03-17T18:45:13.4212001Z ... 'submodule1.linear1': ('foo', 'bar'), 2025-03-17T18:45:13.4212393Z ... 'submodule2.linear42': ('baz',) 2025-03-17T18:45:13.4212737Z ... }) 2025-03-17T18:45:13.4213162Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:13.4213540Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:13.4213919Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:13.4214308Z {'baz': 0.1} 2025-03-17T18:45:13.4214492Z 2025-03-17T18:45:13.4214627Z >>> # Keep sparse params for all layers 2025-03-17T18:45:13.4215067Z >>> sparsifier.squash_mask(params_to_keep=('foo', 'bar')) 2025-03-17T18:45:13.4215533Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:13.4215921Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:13.4216290Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:13.4216677Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:13.4216886Z 2025-03-17T18:45:13.4217089Z >>> # Keep some sparse params for all layers, and specific ones for 2025-03-17T18:45:13.4217527Z >>> # some other layers 2025-03-17T18:45:13.4217861Z >>> sparsifier.squash_mask( 2025-03-17T18:45:13.4218210Z ... params_to_keep=('foo', 'bar'), 2025-03-17T18:45:13.4218577Z ... params_to_keep_per_layer={ 2025-03-17T18:45:13.4218943Z ... 
'submodule2.linear42': ('baz',) 2025-03-17T18:45:13.4219287Z ... }) 2025-03-17T18:45:13.4219617Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:13.4220003Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:13.4220372Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:13.4220770Z {'foo': 42, 'bar': 24, 'baz': 0.1} 2025-03-17T18:45:13.4221094Z 2025-03-17T18:45:13.4221471Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.4221862Z 2025-03-17T18:45:13.5179563Z msg = Cannot scrape callname=DTypeConfig in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/backend_config/backend_config.py line=181. 2025-03-17T18:45:13.5180685Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.5181075Z 2025-03-17T18:45:13.5181351Z Config object that specifies the supported data types passed as arguments to 2025-03-17T18:45:13.5181988Z quantize ops in the reference model spec, for input and output activations, 2025-03-17T18:45:13.5182663Z weights, and biases. 2025-03-17T18:45:13.5182830Z 2025-03-17T18:45:13.5182996Z For example, consider the following reference model: 2025-03-17T18:45:13.5183290Z 2025-03-17T18:45:13.5183454Z quant1 - [dequant1 - fp32_linear - quant2] - dequant2 2025-03-17T18:45:13.5183760Z 2025-03-17T18:45:13.5183993Z The pattern in the square brackets refers to the reference pattern of 2025-03-17T18:45:13.5184581Z statically quantized linear. Setting the input dtype as `torch.quint8` 2025-03-17T18:45:13.5185187Z in the DTypeConfig means we pass in `torch.quint8` as the dtype argument 2025-03-17T18:45:13.5185790Z to the first quantize op (quant1). Similarly, setting the output dtype as 2025-03-17T18:45:13.5186381Z `torch.quint8` means we pass in `torch.quint8` as the dtype argument to 2025-03-17T18:45:13.5186899Z the second quantize op (quant2). 2025-03-17T18:45:13.5187120Z 2025-03-17T18:45:13.5187344Z Note that the dtype here does not refer to the interface dtypes of the 2025-03-17T18:45:13.5187914Z op. For example, the "input dtype" here is not the dtype of the input 2025-03-17T18:45:13.5188476Z tensor passed to the quantized linear op. Though it can still be the 2025-03-17T18:45:13.5189029Z same as the interface dtype, this is not always the case, e.g. the 2025-03-17T18:45:13.5189585Z interface dtype is fp32 in dynamic quantization but the "input dtype" 2025-03-17T18:45:13.5190151Z specified in the DTypeConfig would still be quint8. The semantics of 2025-03-17T18:45:13.5190708Z dtypes here are the same as the semantics of the dtypes specified in 2025-03-17T18:45:13.5191147Z the observers. 2025-03-17T18:45:13.5191416Z 2025-03-17T18:45:13.5191631Z These dtypes are matched against the ones specified in the user's 2025-03-17T18:45:13.5192219Z QConfig. If there is a match, and the QConfig satisfies the constraints 2025-03-17T18:45:13.5192802Z specified in the DTypeConfig (if any), then we will quantize the given 2025-03-17T18:45:13.5193392Z pattern using this DTypeConfig. Otherwise, the QConfig is ignored and 2025-03-17T18:45:13.5193875Z the pattern will not be quantized. 2025-03-17T18:45:13.5194089Z 2025-03-17T18:45:13.5194206Z Example usage:: 2025-03-17T18:45:13.5194366Z 2025-03-17T18:45:13.5194479Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:13.5194810Z >>> dtype_config1 = DTypeConfig( 2025-03-17T18:45:13.5195154Z ... input_dtype=torch.quint8, 2025-03-17T18:45:13.5195500Z ... output_dtype=torch.quint8, 2025-03-17T18:45:13.5195842Z ... 
weight_dtype=torch.qint8, 2025-03-17T18:45:13.5196179Z ... bias_dtype=torch.float) 2025-03-17T18:45:13.5196402Z 2025-03-17T18:45:13.5196522Z >>> dtype_config2 = DTypeConfig( 2025-03-17T18:45:13.5196885Z ... input_dtype=DTypeWithConstraints( 2025-03-17T18:45:13.5197248Z ... dtype=torch.quint8, 2025-03-17T18:45:13.5197575Z ... quant_min_lower_bound=0, 2025-03-17T18:45:13.5197923Z ... quant_max_upper_bound=255, 2025-03-17T18:45:13.5198248Z ... ), 2025-03-17T18:45:13.5198533Z ... output_dtype=DTypeWithConstraints( 2025-03-17T18:45:13.5198896Z ... dtype=torch.quint8, 2025-03-17T18:45:13.5199220Z ... quant_min_lower_bound=0, 2025-03-17T18:45:13.5199562Z ... quant_max_upper_bound=255, 2025-03-17T18:45:13.5199880Z ... ), 2025-03-17T18:45:13.5200140Z ... weight_dtype=DTypeWithConstraints( 2025-03-17T18:45:13.5200499Z ... dtype=torch.qint8, 2025-03-17T18:45:13.5200823Z ... quant_min_lower_bound=-128, 2025-03-17T18:45:13.5201173Z ... quant_max_upper_bound=127, 2025-03-17T18:45:13.5201492Z ... ), 2025-03-17T18:45:13.5201747Z ... bias_dtype=torch.float) 2025-03-17T18:45:13.5201966Z 2025-03-17T18:45:13.5202082Z >>> dtype_config1.input_dtype 2025-03-17T18:45:13.5202392Z torch.quint8 2025-03-17T18:45:13.5202550Z 2025-03-17T18:45:13.5202663Z >>> dtype_config2.input_dtype 2025-03-17T18:45:13.5202974Z torch.quint8 2025-03-17T18:45:13.5203119Z 2025-03-17T18:45:13.5203348Z >>> dtype_config2.input_dtype_with_constraints 2025-03-17T18:45:13.5204166Z DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None) 2025-03-17T18:45:13.5204833Z 2025-03-17T18:45:13.5205109Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.5205503Z 2025-03-17T18:45:13.6368154Z msg = Cannot scrape callname=ModelReportVisualizer.generate_filtered_tables in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=301. 2025-03-17T18:45:13.6369742Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.6370505Z 2025-03-17T18:45:13.6370812Z Takes in optional filter values and generates two tables with desired information. 2025-03-17T18:45:13.6371221Z 2025-03-17T18:45:13.6371438Z The generated tables are presented in both a list-of-lists format 2025-03-17T18:45:13.6371792Z 2025-03-17T18:45:13.6372004Z The reason for the two tables are that they handle different things: 2025-03-17T18:45:13.6372515Z 1.) the first table handles all tensor level information 2025-03-17T18:45:13.6373019Z 2.) the second table handles and displays all channel based information 2025-03-17T18:45:13.6373367Z 2025-03-17T18:45:13.6373805Z The reasoning for this is that having all the info in one table can make it ambiguous which collected 2025-03-17T18:45:13.6375095Z statistics are global, and which are actually per-channel, so it's better to split it up into two 2025-03-17T18:45:13.6376150Z tables. This also makes the information much easier to digest given the plethora of statistics collected 2025-03-17T18:45:13.6376712Z 2025-03-17T18:45:13.6376867Z Tensor table columns: 2025-03-17T18:45:13.6377245Z idx layer_fqn feature_1 feature_2 feature_3 .... feature_n 2025-03-17T18:45:13.6377917Z ---- --------- --------- --------- --------- --------- 2025-03-17T18:45:13.6378431Z 2025-03-17T18:45:13.6378548Z Per-Channel table columns: 2025-03-17T18:45:13.6378972Z idx layer_fqn channel feature_1 feature_2 feature_3 .... 
feature_n 2025-03-17T18:45:13.6379496Z ---- --------- ------- --------- --------- --------- --------- 2025-03-17T18:45:13.6379798Z 2025-03-17T18:45:13.6379888Z Args: 2025-03-17T18:45:13.6380291Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:13.6380809Z contain this filter substring 2025-03-17T18:45:13.6381204Z Default = "", results in all the features being printed 2025-03-17T18:45:13.6381771Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:13.6382452Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:13.6382835Z 2025-03-17T18:45:13.6382956Z Returns a dictionary with two keys: 2025-03-17T18:45:13.6383363Z (Dict[str, Tuple[List, List]]) A dict containing two keys: 2025-03-17T18:45:13.6383793Z "tensor_level_info", "channel_level_info" 2025-03-17T18:45:13.6384154Z Each key maps to a tuple with: 2025-03-17T18:45:13.6384503Z A list of the headers of each table 2025-03-17T18:45:13.6384931Z A list of lists containing the table information row by row 2025-03-17T18:45:13.6385418Z The 0th index row will contain the headers of the columns 2025-03-17T18:45:13.6385848Z The rest of the rows will contain data 2025-03-17T18:45:13.6386089Z 2025-03-17T18:45:13.6386198Z Example Use: 2025-03-17T18:45:13.6386556Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:13.6386964Z >>> mod_report_visualizer.generate_filtered_tables( 2025-03-17T18:45:13.6387369Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:13.6387729Z ... module_fqn_filter = "block1" 2025-03-17T18:45:13.6388386Z ... ) # generates table with per_channel_min info for all modules in block 1 of the model 2025-03-17T18:45:13.6388790Z 2025-03-17T18:45:13.6389064Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.6389443Z 2025-03-17T18:45:13.6390382Z msg = Cannot scrape callname=ModelReportVisualizer.generate_table_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=400. 2025-03-17T18:45:13.6391693Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.6392079Z 2025-03-17T18:45:13.6392373Z Takes in optional filter values and prints out formatted tables of the information. 2025-03-17T18:45:13.6392781Z 2025-03-17T18:45:13.6393142Z The reason for the two tables printed out instead of one large one are that they handle different things: 2025-03-17T18:45:13.6393789Z 1.) the first table handles all tensor level information 2025-03-17T18:45:13.6394303Z 2.) the second table handles and displays all channel based information 2025-03-17T18:45:13.6394643Z 2025-03-17T18:45:13.6394981Z The reasoning for this is that having all the info in one table can make it ambiguous which collected 2025-03-17T18:45:13.6395775Z statistics are global, and which are actually per-channel, so it's better to split it up into two 2025-03-17T18:45:13.6396604Z tables. This also makes the information much easier to digest given the plethora of statistics collected 2025-03-17T18:45:13.6397087Z 2025-03-17T18:45:13.6397202Z Tensor table columns: 2025-03-17T18:45:13.6397572Z idx layer_fqn feature_1 feature_2 feature_3 .... 
feature_n 2025-03-17T18:45:13.6398110Z ---- --------- --------- --------- --------- --------- 2025-03-17T18:45:13.6398382Z 2025-03-17T18:45:13.6398510Z Per-Channel table columns: 2025-03-17T18:45:13.6398698Z 2025-03-17T18:45:13.6398934Z idx layer_fqn channel feature_1 feature_2 feature_3 .... feature_n 2025-03-17T18:45:13.6399461Z ---- --------- ------- --------- --------- --------- --------- 2025-03-17T18:45:13.6399750Z 2025-03-17T18:45:13.6399856Z Args: 2025-03-17T18:45:13.6400264Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:13.6400786Z contain this filter substring 2025-03-17T18:45:13.6401172Z Default = "", results in all the features being printed 2025-03-17T18:45:13.6401741Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:13.6402403Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:13.6402799Z 2025-03-17T18:45:13.6402900Z Example Use: 2025-03-17T18:45:13.6403182Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:13.6403609Z >>> mod_report_visualizer.generate_table_visualization( 2025-03-17T18:45:13.6404038Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:13.6404404Z ... module_fqn_filter = "block1" 2025-03-17T18:45:13.6404728Z ... ) 2025-03-17T18:45:13.6405063Z >>> # prints out neatly formatted table with per_channel_min info 2025-03-17T18:45:13.6405514Z >>> # for all modules in block 1 of the model 2025-03-17T18:45:13.6405766Z 2025-03-17T18:45:13.6406029Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.6406421Z 2025-03-17T18:45:13.6407340Z msg = Cannot scrape callname=ModelReportVisualizer.generate_plot_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=566. 2025-03-17T18:45:13.6408650Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.6409049Z 2025-03-17T18:45:13.6409293Z Takes in a feature and optional module_filter and plots of the desired data. 2025-03-17T18:45:13.6409671Z 2025-03-17T18:45:13.6409948Z For per channel features, it averages the value across the channels and plots a point 2025-03-17T18:45:13.6410681Z per module. The reason for this is that for models with hundreds of channels, it can 2025-03-17T18:45:13.6411366Z be hard to differentiate one channel line from another, and so the point of generating 2025-03-17T18:45:13.6412055Z a single average point per module is to give a sense of general trends that encourage 2025-03-17T18:45:13.6412563Z further deep dives. 
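The averaging step described above can be illustrated without the ModelReportVisualizer itself: collapse each module's per-channel values into one mean, giving a single plottable point per module. A minimal sketch, with made-up module names and values (not the actual report data structures):

# One average point per module, as described above; purely illustrative.
per_channel_min = {                      # hypothetical collected statistic
    "block1.conv1": [0.10, 0.12, 0.08],
    "block1.conv2": [0.05, 0.07],
    "block2.conv1": [0.20, 0.18, 0.22, 0.19],
}
x_vals = list(range(len(per_channel_min)))                    # one x value per module
y_vals = [sum(v) / len(v) for v in per_channel_min.values()]  # channel-wise mean per module
for fqn, y in zip(per_channel_min, y_vals):
    print(fqn, round(y, 3))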
2025-03-17T18:45:13.6412733Z 2025-03-17T18:45:13.6412822Z Note: 2025-03-17T18:45:13.6413225Z Only features in the report that have tensor value data are plottable by this class 2025-03-17T18:45:13.6413799Z When the tensor information is plotted, it will plot: 2025-03-17T18:45:13.6414236Z idx as the x val, feature value as the y_val 2025-03-17T18:45:13.6414673Z When the channel information is plotted, it will plot: 2025-03-17T18:45:13.6415244Z the first idx of each module as the x val, feature value as the y_val [for each channel] 2025-03-17T18:45:13.6415884Z The reason for this is that we want to be able to compare values across the 2025-03-17T18:45:13.6416496Z channels for same layer, and it will be hard if values are staggered by idx 2025-03-17T18:45:13.6417050Z This means each module is represented by only 1 x value 2025-03-17T18:45:13.6417438Z Args: 2025-03-17T18:45:13.6417801Z feature_filter (str): Filters the features presented to only those that 2025-03-17T18:45:13.6418277Z contain this filter substring 2025-03-17T18:45:13.6418757Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:13.6419408Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:13.6419858Z 2025-03-17T18:45:13.6419954Z Example Use: 2025-03-17T18:45:13.6420232Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:13.6420651Z >>> mod_report_visualizer.generate_plot_visualization( 2025-03-17T18:45:13.6421071Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:13.6421445Z ... module_fqn_filter = "block1" 2025-03-17T18:45:13.6421769Z ... ) 2025-03-17T18:45:13.6422091Z >>> # outputs line plot of per_channel_min information for all 2025-03-17T18:45:13.6422593Z >>> # modules in block1 of model each channel gets it's own line, 2025-03-17T18:45:13.6423089Z >>> # and it's plotted across the in-order modules on the x-axis 2025-03-17T18:45:13.6423402Z 2025-03-17T18:45:13.6423662Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.6424050Z 2025-03-17T18:45:13.6424998Z msg = Cannot scrape callname=ModelReportVisualizer.generate_histogram_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=646. 2025-03-17T18:45:13.6426325Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.6426795Z 2025-03-17T18:45:13.6427085Z Takes in a feature and optional module_filter and plots the histogram of desired data. 
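The histogram view described here amounts to binning the collected channel values into a fixed number of equal-width bins (the num_bins argument documented below defaults to 10). A minimal sketch using torch.histc on made-up values, not the visualizer's own plotting code:

import torch

channel_values = torch.tensor([0.05, 0.07, 0.08, 0.10, 0.12, 0.18, 0.19, 0.20, 0.22])  # hypothetical data
counts = torch.histc(channel_values, bins=10)   # 10 equal-width bins spanning the data range
print(counts)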
2025-03-17T18:45:13.6427515Z 2025-03-17T18:45:13.6427605Z Note: 2025-03-17T18:45:13.6428017Z Only features in the report that have tensor value data can be viewed as a histogram 2025-03-17T18:45:13.6428697Z If you want to plot a histogram from all the channel values of a specific feature for 2025-03-17T18:45:13.6429353Z a specific model, make sure to specify both the model and the feature properly 2025-03-17T18:45:13.6429991Z in the filters and you should be able to see a distribution of the channel data 2025-03-17T18:45:13.6430376Z 2025-03-17T18:45:13.6430464Z Args: 2025-03-17T18:45:13.6430868Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:13.6431381Z contain this filter substring 2025-03-17T18:45:13.6431774Z Default = "", results in all the features being printed 2025-03-17T18:45:13.6432332Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:13.6433049Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:13.6433663Z num_bins (int, optional): The number of bins to create the histogram with 2025-03-17T18:45:13.6434215Z Default = 10, the values will be split into 10 equal sized bins 2025-03-17T18:45:13.6434525Z 2025-03-17T18:45:13.6434635Z Example Use: 2025-03-17T18:45:13.6434888Z >>> # xdoctest: +SKIP 2025-03-17T18:45:13.6435378Z >>> mod_report_visualizer.generategenerate_histogram_visualization_plot_visualization( 2025-03-17T18:45:13.6435944Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:13.6436300Z ... module_fqn_filter = "block1" 2025-03-17T18:45:13.6436618Z ... ) 2025-03-17T18:45:13.6437267Z # outputs histogram of per_channel_min information for all modules in block1 of model 2025-03-17T18:45:13.6437944Z information is gathered across all channels for all modules in block 1 for the 2025-03-17T18:45:13.6438571Z per_channel_min and is displayed in a histogram of equally sized bins 2025-03-17T18:45:13.6438937Z 2025-03-17T18:45:13.6439201Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.6439593Z 2025-03-17T18:45:13.9242659Z msg = Cannot scrape callname=DeviceMesh.__getitem__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py line=666. 2025-03-17T18:45:13.9243671Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:13.9244065Z 2025-03-17T18:45:13.9244348Z Slice the current DeviceMesh based on the mesh_dim_names given to create a submesh. 2025-03-17T18:45:13.9245323Z The submesh created consists of the dimensions and the communicators indicated by 2025-03-17T18:45:13.9245816Z ``mesh_dim_names`` 2025-03-17T18:45:13.9245984Z 2025-03-17T18:45:13.9246076Z Args: 2025-03-17T18:45:13.9246456Z mesh_dim_names (Union[str, Tuple[str]]): the name or the tuple of names of the 2025-03-17T18:45:13.9247037Z mesh dimension of the DeviceMesh to create the submesh for. 2025-03-17T18:45:13.9247451Z Returns: 2025-03-17T18:45:13.9247707Z A :class:`DeviceMesh` object 2025-03-17T18:45:13.9247924Z 2025-03-17T18:45:13.9248222Z The following program runs on each process/rank in an SPMD manner in a world size of 8. 2025-03-17T18:45:13.9248752Z In the first example: 2025-03-17T18:45:13.9249193Z Calling mesh_2d["tp"] on rank 0, 1, 2, 3 returns a 1D submesh of DeviceMesh:([0, 1, 2, 3]). 2025-03-17T18:45:13.9249840Z Calling mesh_2d["tp"] on rank 4, 5, 6, 7 returns a 1D submesh of DeviceMesh:([4, 5, 6, 7]). 
2025-03-17T18:45:13.9250464Z Calling mesh_2d["dp"] on rank 0, 4 returns a 1D submesh of DeviceMesh:([0, 4]). 2025-03-17T18:45:13.9251062Z Calling mesh_2d["dp"] on rank 1, 5 returns a 1D submesh of DeviceMesh:([1, 5]). 2025-03-17T18:45:13.9251658Z Calling mesh_2d["dp"] on rank 2, 6 returns a 1D submesh of DeviceMesh:([2, 6]). 2025-03-17T18:45:13.9252259Z Calling mesh_2d["dp"] on rank 3, 7 returns a 1D submesh of DeviceMesh:([3, 7]). 2025-03-17T18:45:13.9252621Z 2025-03-17T18:45:13.9252730Z In the second example: 2025-03-17T18:45:13.9253185Z Calling mesh_3d["dp", "cp"] on rank 0, 1, 4, 5 returns a 2D submesh of DeviceMesh:([[0, 1], [4, 5]]). 2025-03-17T18:45:13.9253855Z Calling mesh_3d["dp", "cp"] on rank 2, 3, 6, 7 returns a 2D submesh of DeviceMesh:([[2, 3], [6, 7]]). 2025-03-17T18:45:13.9254522Z Calling mesh_3d["cp", "dp"] on rank 0, 1, 4, 5 returns a 2D submesh of DeviceMesh:([[0, 4], [1, 5]]). 2025-03-17T18:45:13.9255189Z Calling mesh_3d["cp", "dp"] on rank 2, 3, 6, 7 returns a 2D submesh of DeviceMesh:([[2, 6], [3, 7]]). 2025-03-17T18:45:13.9255583Z 2025-03-17T18:45:13.9255715Z Example:: 2025-03-17T18:45:13.9255963Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:13.9256360Z >>> from torch.distributed.device_mesh import DeviceMesh 2025-03-17T18:45:13.9256749Z >>> 2025-03-17T18:45:13.9257086Z >>> # Initialize a 2D device mesh as (2, 4) to represent the topology 2025-03-17T18:45:13.9257675Z >>> # of cross-host(dim 0), and within-host (dim 1). 2025-03-17T18:45:13.9258206Z >>> mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp")) 2025-03-17T18:45:13.9258707Z >>> tp_mesh = mesh_2d["tp"] 2025-03-17T18:45:13.9259003Z >>> dp_mesh = mesh_2d["dp"] 2025-03-17T18:45:13.9259296Z >>> 2025-03-17T18:45:13.9259533Z >>> # Initialize a 3D mesh. 2025-03-17T18:45:13.9260021Z >>> mesh_3d = init_device_mesh(device_type="cuda", (2,2,2), mesh_dim_names=("dp", "pp", "cp")) 2025-03-17T18:45:13.9260744Z >>> # The order of the mesh_dim_names provided deteremines the order of dimensions in the submesh. 2025-03-17T18:45:13.9261314Z >>> dp_cp_mesh = mesh_3d["dp", "cp"] 2025-03-17T18:45:13.9261662Z >>> cp_dp_mesh = mesh_3d["cp", "dp"] 2025-03-17T18:45:13.9261897Z 2025-03-17T18:45:13.9262585Z Original Error: SyntaxError('positional argument follows keyword argument', ('', 6, 82, 'mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp"))\n', 6, 83)) 2025-03-17T18:45:13.9263393Z 2025-03-17T18:45:13.9263643Z mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp")) 2025-03-17T18:45:13.9264149Z ^ 2025-03-17T18:45:13.9567934Z msg = Cannot scrape callname=batch_isend_irecv in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=2604. 2025-03-17T18:45:13.9568958Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9569535Z 2025-03-17T18:45:13.9569797Z Send or Receive a batch of tensors asynchronously and return a list of requests. 2025-03-17T18:45:13.9570190Z 2025-03-17T18:45:13.9570447Z Process each of the operations in ``p2p_op_list`` and return the corresponding 2025-03-17T18:45:13.9571036Z requests. NCCL, Gloo, and UCC backend are currently supported. 2025-03-17T18:45:13.9571369Z 2025-03-17T18:45:13.9571466Z Args: 2025-03-17T18:45:13.9571833Z p2p_op_list: A list of point-to-point operations(type of each operator is 2025-03-17T18:45:13.9572429Z ``torch.distributed.P2POp``). 
The order of the isend/irecv in the list 2025-03-17T18:45:13.9573004Z matters and it needs to match with corresponding isend/irecv on the 2025-03-17T18:45:13.9573445Z remote end. 2025-03-17T18:45:13.9573611Z 2025-03-17T18:45:13.9573703Z Returns: 2025-03-17T18:45:13.9574092Z A list of distributed request objects returned by calling the corresponding 2025-03-17T18:45:13.9574782Z op in the op_list. 2025-03-17T18:45:13.9574970Z 2025-03-17T18:45:13.9575065Z Examples: 2025-03-17T18:45:13.9575327Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:13.9576075Z >>> send_tensor = torch.arange(2, dtype=torch.float32) + 2 * rank 2025-03-17T18:45:13.9576654Z >>> recv_tensor = torch.randn(2, dtype=torch.float32) 2025-03-17T18:45:13.9577163Z >>> send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size) 2025-03-17T18:45:13.9577623Z >>> recv_op = dist.P2POp( 2025-03-17T18:45:13.9578024Z ... dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size 2025-03-17T18:45:13.9578440Z ... ) 2025-03-17T18:45:13.9578713Z >>> reqs = batch_isend_irecv([send_op, recv_op]) 2025-03-17T18:45:13.9579077Z >>> for req in reqs: 2025-03-17T18:45:13.9579338Z >>> req.wait() 2025-03-17T18:45:13.9579640Z >>> recv_tensor 2025-03-17T18:45:13.9579896Z tensor([2, 3]) # Rank 0 2025-03-17T18:45:13.9580194Z tensor([0, 1]) # Rank 1 2025-03-17T18:45:13.9580380Z 2025-03-17T18:45:13.9580670Z .. note:: Note that when this API is used with the NCCL PG backend, users must set 2025-03-17T18:45:13.9581277Z the current GPU device with `torch.cuda.set_device`, otherwise it will 2025-03-17T18:45:13.9581736Z lead to unexpected hang issues. 2025-03-17T18:45:13.9581965Z 2025-03-17T18:45:13.9582182Z In addition, if this API is the first collective call in the ``group`` 2025-03-17T18:45:13.9582882Z passed to ``dist.P2POp``, all ranks of the ``group`` must participate in 2025-03-17T18:45:13.9583470Z this API call; otherwise, the behavior is undefined. If this API call is 2025-03-17T18:45:13.9584056Z not the first collective call in the ``group``, batched P2P operations 2025-03-17T18:45:13.9584615Z involving only a subset of ranks of the ``group`` are allowed. 2025-03-17T18:45:13.9584945Z 2025-03-17T18:45:13.9585203Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9585599Z 2025-03-17T18:45:13.9586181Z msg = Cannot scrape callname=all_reduce in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=2734. 2025-03-17T18:45:13.9587246Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9587654Z 2025-03-17T18:45:13.9587931Z Reduces the tensor data across all machines in a way that all get the final result. 2025-03-17T18:45:13.9588348Z 2025-03-17T18:45:13.9588583Z After the call ``tensor`` is going to be bitwise identical in all processes. 2025-03-17T18:45:13.9588952Z 2025-03-17T18:45:13.9589069Z Complex tensors are supported. 2025-03-17T18:45:13.9589281Z 2025-03-17T18:45:13.9589370Z Args: 2025-03-17T18:45:13.9589709Z tensor (Tensor): Input and output of the collective. The function 2025-03-17T18:45:13.9590153Z operates in-place. 2025-03-17T18:45:13.9590464Z op (optional): One of the values from 2025-03-17T18:45:13.9590827Z ``torch.distributed.ReduceOp`` 2025-03-17T18:45:13.9591296Z enum. Specifies an operation used for element-wise reductions. 2025-03-17T18:45:13.9592307Z group (ProcessGroup, optional): The process group to work on. 
If None, 2025-03-17T18:45:13.9593055Z the default process group will be used. 2025-03-17T18:45:13.9593850Z async_op (bool, optional): Whether this op should be an async op 2025-03-17T18:45:13.9594171Z 2025-03-17T18:45:13.9594280Z Returns: 2025-03-17T18:45:13.9594567Z Async work handle, if async_op is set to True. 2025-03-17T18:45:13.9594990Z None, if not async_op or if not part of the group 2025-03-17T18:45:13.9595256Z 2025-03-17T18:45:13.9595366Z Examples: 2025-03-17T18:45:13.9595620Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:13.9595985Z >>> # All tensors below are of torch.int64 type. 2025-03-17T18:45:13.9596379Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:13.9596749Z >>> device = torch.device(f"cuda:{rank}") 2025-03-17T18:45:13.9597231Z >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank 2025-03-17T18:45:13.9597683Z >>> tensor 2025-03-17T18:45:13.9597953Z tensor([1, 2], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9598309Z tensor([3, 4], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9598681Z >>> dist.all_reduce(tensor, op=ReduceOp.SUM) 2025-03-17T18:45:13.9599033Z >>> tensor 2025-03-17T18:45:13.9599296Z tensor([4, 6], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9599654Z tensor([4, 6], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9599895Z 2025-03-17T18:45:13.9600041Z >>> # All tensors below are of torch.cfloat type. 2025-03-17T18:45:13.9600424Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:13.9616498Z >>> tensor = torch.tensor( 2025-03-17T18:45:13.9616986Z ... [1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device 2025-03-17T18:45:13.9617378Z ... ) + 2 * rank * (1 + 1j) 2025-03-17T18:45:13.9617674Z >>> tensor 2025-03-17T18:45:13.9617956Z tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9618390Z tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9618798Z >>> dist.all_reduce(tensor, op=ReduceOp.SUM) 2025-03-17T18:45:13.9619150Z >>> tensor 2025-03-17T18:45:13.9619441Z tensor([4.+4.j, 6.+6.j], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9619841Z tensor([4.+4.j, 6.+6.j], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9620112Z 2025-03-17T18:45:13.9620234Z 2025-03-17T18:45:13.9620497Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9620894Z 2025-03-17T18:45:13.9621532Z msg = Cannot scrape callname=gather_object in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=3090. 2025-03-17T18:45:13.9622531Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9622938Z 2025-03-17T18:45:13.9623170Z Gathers picklable objects from the whole group in a single process. 2025-03-17T18:45:13.9623534Z 2025-03-17T18:45:13.9623783Z Similar to :func:`gather`, but Python objects can be passed in. Note that the 2025-03-17T18:45:13.9624305Z object must be picklable in order to be gathered. 2025-03-17T18:45:13.9624582Z 2025-03-17T18:45:13.9624672Z Args: 2025-03-17T18:45:13.9624939Z obj (Any): Input object. Must be picklable. 2025-03-17T18:45:13.9625413Z object_gather_list (list[Any]): Output list. On the ``dst`` rank, it 2025-03-17T18:45:13.9625953Z should be correctly sized as the size of the group for this 2025-03-17T18:45:13.9626586Z collective and will contain the output. Must be ``None`` on non-dst 2025-03-17T18:45:13.9627055Z ranks. 
(default is ``None``) 2025-03-17T18:45:13.9627610Z dst (int, optional): Destination rank on global process group (regardless of ``group`` argument). 2025-03-17T18:45:13.9628269Z (If both ``dst`` and ``group_dst`` are None, default is global rank 0) 2025-03-17T18:45:13.9628841Z group: (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:13.9629470Z the default process group will be used. Default is ``None``. 2025-03-17T18:45:13.9630146Z group_dst (int, optional): Destination rank on ``group``. Invalid to specify both ``dst`` and ``group_dst`` 2025-03-17T18:45:13.9630616Z 2025-03-17T18:45:13.9630722Z Returns: 2025-03-17T18:45:13.9631049Z None. On the ``dst`` rank, ``object_gather_list`` will contain the 2025-03-17T18:45:13.9631477Z output of the collective. 2025-03-17T18:45:13.9631734Z 2025-03-17T18:45:13.9632114Z .. note:: Note that this API differs slightly from the gather collective 2025-03-17T18:45:13.9632708Z since it does not provide an async_op handle and thus will be a blocking 2025-03-17T18:45:13.9633418Z call. 2025-03-17T18:45:13.9633654Z 2025-03-17T18:45:13.9634043Z .. note:: For NCCL-based processed groups, internal tensor representations 2025-03-17T18:45:13.9634626Z of objects must be moved to the GPU device before communication takes 2025-03-17T18:45:13.9635120Z place. In this case, the device used is given by 2025-03-17T18:45:13.9635624Z ``torch.cuda.current_device()`` and it is the user's responsiblity to 2025-03-17T18:45:13.9636192Z ensure that this is set so that each rank has an individual GPU, via 2025-03-17T18:45:13.9636648Z ``torch.cuda.set_device()``. 2025-03-17T18:45:13.9637055Z 2025-03-17T18:45:13.9637169Z .. warning:: 2025-03-17T18:45:13.9637513Z :func:`gather_object` uses ``pickle`` module implicitly, which is 2025-03-17T18:45:13.9638074Z known to be insecure. It is possible to construct malicious pickle data 2025-03-17T18:45:13.9638650Z which will execute arbitrary code during unpickling. Only call this 2025-03-17T18:45:13.9639120Z function with data you trust. 2025-03-17T18:45:13.9639342Z 2025-03-17T18:45:13.9639439Z .. warning:: 2025-03-17T18:45:13.9639809Z Calling :func:`gather_object` with GPU tensors is not well supported 2025-03-17T18:45:13.9640393Z and inefficient as it incurs GPU -> CPU transfer since tensors would be 2025-03-17T18:45:13.9640932Z pickled. Please consider using :func:`gather` instead. 2025-03-17T18:45:13.9641232Z 2025-03-17T18:45:13.9641326Z Example:: 2025-03-17T18:45:13.9641605Z >>> # xdoctest: +SKIP("need process group init") 2025-03-17T18:45:13.9642051Z >>> # Note: Process group initialization omitted on each rank. 2025-03-17T18:45:13.9642485Z >>> import torch.distributed as dist 2025-03-17T18:45:13.9642981Z >>> # Assumes world_size of 3. 2025-03-17T18:45:13.9643383Z >>> gather_objects = ["foo", 12, {1: 2}] # any picklable object 2025-03-17T18:45:13.9643820Z >>> output = [None for _ in gather_objects] 2025-03-17T18:45:13.9644177Z >>> dist.gather_object( 2025-03-17T18:45:13.9644499Z ... gather_objects[dist.get_rank()], 2025-03-17T18:45:13.9644882Z ... output if dist.get_rank() == 0 else None, 2025-03-17T18:45:13.9645236Z ... dst=0 2025-03-17T18:45:13.9645479Z ... 
) 2025-03-17T18:45:13.9645694Z >>> # On rank 0 2025-03-17T18:45:13.9645947Z >>> output 2025-03-17T18:45:13.9646190Z ['foo', 12, {1: 2}] 2025-03-17T18:45:13.9646361Z 2025-03-17T18:45:13.9646620Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9647013Z 2025-03-17T18:45:13.9647622Z msg = Cannot scrape callname=all_gather in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=3666. 2025-03-17T18:45:13.9648610Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9649002Z 2025-03-17T18:45:13.9649167Z Gathers tensors from the whole group in a list. 2025-03-17T18:45:13.9649425Z 2025-03-17T18:45:13.9649576Z Complex and uneven sized tensors are supported. 2025-03-17T18:45:13.9649851Z 2025-03-17T18:45:13.9649947Z Args: 2025-03-17T18:45:13.9650260Z tensor_list (list[Tensor]): Output list. It should contain 2025-03-17T18:45:13.9650792Z correctly-sized tensors to be used for output of the collective. 2025-03-17T18:45:13.9651353Z Uneven sized tensors are supported. 2025-03-17T18:45:13.9651799Z tensor (Tensor): Tensor to be broadcast from current process. 2025-03-17T18:45:13.9652363Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:13.9652865Z the default process group will be used. 2025-03-17T18:45:13.9653326Z async_op (bool, optional): Whether this op should be an async op 2025-03-17T18:45:13.9653657Z 2025-03-17T18:45:13.9653752Z Returns: 2025-03-17T18:45:13.9654027Z Async work handle, if async_op is set to True. 2025-03-17T18:45:13.9654444Z None, if not async_op or if not part of the group 2025-03-17T18:45:13.9654718Z 2025-03-17T18:45:13.9654811Z Examples: 2025-03-17T18:45:13.9655085Z >>> # xdoctest: +SKIP("need process group init") 2025-03-17T18:45:13.9655488Z >>> # All tensors below are of torch.int64 dtype. 2025-03-17T18:45:13.9655868Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:13.9656229Z >>> device = torch.device(f"cuda:{rank}") 2025-03-17T18:45:13.9656574Z >>> tensor_list = [ 2025-03-17T18:45:13.9656961Z ... torch.zeros(2, dtype=torch.int64, device=device) for _ in range(2) 2025-03-17T18:45:13.9657394Z ... ] 2025-03-17T18:45:13.9657626Z >>> tensor_list 2025-03-17T18:45:13.9657976Z [tensor([0, 0], device='cuda:0'), tensor([0, 0], device='cuda:0')] # Rank 0 2025-03-17T18:45:13.9658511Z [tensor([0, 0], device='cuda:1'), tensor([0, 0], device='cuda:1')] # Rank 1 2025-03-17T18:45:13.9659078Z >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank 2025-03-17T18:45:13.9659528Z >>> tensor 2025-03-17T18:45:13.9659795Z tensor([1, 2], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9660147Z tensor([3, 4], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9660502Z >>> dist.all_gather(tensor_list, tensor) 2025-03-17T18:45:13.9660834Z >>> tensor_list 2025-03-17T18:45:13.9661189Z [tensor([1, 2], device='cuda:0'), tensor([3, 4], device='cuda:0')] # Rank 0 2025-03-17T18:45:13.9661724Z [tensor([1, 2], device='cuda:1'), tensor([3, 4], device='cuda:1')] # Rank 1 2025-03-17T18:45:13.9662057Z 2025-03-17T18:45:13.9662204Z >>> # All tensors below are of torch.cfloat dtype. 2025-03-17T18:45:13.9662593Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:13.9662933Z >>> tensor_list = [ 2025-03-17T18:45:13.9663782Z ... torch.zeros(2, dtype=torch.cfloat, device=device) for _ in range(2) 2025-03-17T18:45:13.9664232Z ... 
] 2025-03-17T18:45:13.9664465Z >>> tensor_list 2025-03-17T18:45:13.9664882Z [tensor([0.+0.j, 0.+0.j], device='cuda:0'), tensor([0.+0.j, 0.+0.j], device='cuda:0')] # Rank 0 2025-03-17T18:45:13.9665510Z [tensor([0.+0.j, 0.+0.j], device='cuda:1'), tensor([0.+0.j, 0.+0.j], device='cuda:1')] # Rank 1 2025-03-17T18:45:13.9665998Z >>> tensor = torch.tensor( 2025-03-17T18:45:13.9666348Z ... [1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device 2025-03-17T18:45:13.9666837Z ... ) + 2 * rank * (1 + 1j) 2025-03-17T18:45:13.9667120Z >>> tensor 2025-03-17T18:45:13.9667411Z tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 2025-03-17T18:45:13.9667815Z tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 2025-03-17T18:45:13.9668200Z >>> dist.all_gather(tensor_list, tensor) 2025-03-17T18:45:13.9668537Z >>> tensor_list 2025-03-17T18:45:13.9668952Z [tensor([1.+1.j, 2.+2.j], device='cuda:0'), tensor([3.+3.j, 4.+4.j], device='cuda:0')] # Rank 0 2025-03-17T18:45:13.9669575Z [tensor([1.+1.j, 2.+2.j], device='cuda:1'), tensor([3.+3.j, 4.+4.j], device='cuda:1')] # Rank 1 2025-03-17T18:45:13.9669955Z 2025-03-17T18:45:13.9669959Z 2025-03-17T18:45:13.9670219Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9670610Z 2025-03-17T18:45:13.9692654Z msg = Cannot scrape callname=all_to_all_single in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=4381. 2025-03-17T18:45:13.9693660Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9694177Z 2025-03-17T18:45:13.9694436Z Split input tensor and then scatter the split list to all processes in a group. 2025-03-17T18:45:13.9694828Z 2025-03-17T18:45:13.9695096Z Later the received tensors are concatenated from all the processes in the group 2025-03-17T18:45:13.9695617Z and returned as a single output tensor. 2025-03-17T18:45:13.9695867Z 2025-03-17T18:45:13.9695985Z Complex tensors are supported. 2025-03-17T18:45:13.9696199Z 2025-03-17T18:45:13.9696287Z Args: 2025-03-17T18:45:13.9696588Z output (Tensor): Gathered concatenated output tensor. 2025-03-17T18:45:13.9697011Z input (Tensor): Input tensor to scatter. 2025-03-17T18:45:13.9697483Z output_split_sizes: (list[Int], optional): Output split sizes for dim 0 2025-03-17T18:45:13.9698047Z if specified None or empty, dim 0 of ``output`` tensor must divide 2025-03-17T18:45:13.9698500Z equally by ``world_size``. 2025-03-17T18:45:13.9698935Z input_split_sizes: (list[Int], optional): Input split sizes for dim 0 2025-03-17T18:45:13.9699491Z if specified None or empty, dim 0 of ``input`` tensor must divide 2025-03-17T18:45:13.9699935Z equally by ``world_size``. 2025-03-17T18:45:13.9700388Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:13.9700882Z the default process group will be used. 2025-03-17T18:45:13.9701341Z async_op (bool, optional): Whether this op should be an async op. 2025-03-17T18:45:13.9701659Z 2025-03-17T18:45:13.9701762Z Returns: 2025-03-17T18:45:13.9702039Z Async work handle, if async_op is set to True. 2025-03-17T18:45:13.9702454Z None, if not async_op or if not part of the group. 2025-03-17T18:45:13.9702720Z 2025-03-17T18:45:13.9702835Z .. warning:: 2025-03-17T18:45:13.9703157Z `all_to_all_single` is experimental and subject to change. 
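Before the distributed examples below, the data movement of all_to_all_single with an even split can be mirrored on a single process: each rank's input is chunked into world_size pieces, piece j goes to rank j, and rank r receives piece r from everyone. A minimal sketch with plain tensors and no process group, where outputs[r] stands in for what rank r would see:

import torch

world_size = 4
inputs = [torch.arange(4) + r * 4 for r in range(world_size)]   # what each rank would hold
send_chunks = [list(inp.chunk(world_size)) for inp in inputs]   # chunk j is destined for rank j
outputs = [
    torch.cat([send_chunks[src][dst] for src in range(world_size)])
    for dst in range(world_size)
]
print(outputs[0])   # tensor([ 0,  4,  8, 12]), matching the Rank 0 output in the first example below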
2025-03-17T18:45:13.9703578Z 2025-03-17T18:45:13.9703743Z Examples: 2025-03-17T18:45:13.9704077Z >>> # xdoctest: +SKIP("Undefined rank") 2025-03-17T18:45:13.9704608Z >>> input = torch.arange(4) + rank * 4 2025-03-17T18:45:13.9705086Z >>> input 2025-03-17T18:45:13.9705389Z tensor([0, 1, 2, 3]) # Rank 0 2025-03-17T18:45:13.9705715Z tensor([4, 5, 6, 7]) # Rank 1 2025-03-17T18:45:13.9706037Z tensor([8, 9, 10, 11]) # Rank 2 2025-03-17T18:45:13.9706356Z tensor([12, 13, 14, 15]) # Rank 3 2025-03-17T18:45:13.9706881Z >>> output = torch.empty([4], dtype=torch.int64) 2025-03-17T18:45:13.9707281Z >>> dist.all_to_all_single(output, input) 2025-03-17T18:45:13.9707625Z >>> output 2025-03-17T18:45:13.9707878Z tensor([0, 4, 8, 12]) # Rank 0 2025-03-17T18:45:13.9708201Z tensor([1, 5, 9, 13]) # Rank 1 2025-03-17T18:45:13.9708519Z tensor([2, 6, 10, 14]) # Rank 2 2025-03-17T18:45:13.9708834Z tensor([3, 7, 11, 15]) # Rank 3 2025-03-17T18:45:13.9709038Z 2025-03-17T18:45:13.9709221Z >>> # Essentially, it is similar to following operation: 2025-03-17T18:45:13.9709658Z >>> scatter_list = list(input.chunk(world_size)) 2025-03-17T18:45:13.9710069Z >>> gather_list = list(output.chunk(world_size)) 2025-03-17T18:45:13.9710446Z >>> for i in range(world_size): 2025-03-17T18:45:13.9710893Z >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src = i) 2025-03-17T18:45:13.9711255Z 2025-03-17T18:45:13.9711390Z >>> # Another example with uneven split 2025-03-17T18:45:13.9711819Z >>> input 2025-03-17T18:45:13.9712118Z tensor([0, 1, 2, 3, 4, 5]) # Rank 0 2025-03-17T18:45:13.9712571Z tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 2025-03-17T18:45:13.9713025Z tensor([20, 21, 22, 23, 24]) # Rank 2 2025-03-17T18:45:13.9713475Z tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 2025-03-17T18:45:13.9713908Z >>> input_splits 2025-03-17T18:45:13.9714203Z [2, 2, 1, 1] # Rank 0 2025-03-17T18:45:13.9714677Z [3, 2, 2, 2] # Rank 1 2025-03-17T18:45:13.9715062Z [2, 1, 1, 1] # Rank 2 2025-03-17T18:45:13.9715446Z [2, 2, 2, 1] # Rank 3 2025-03-17T18:45:13.9715880Z >>> output_splits 2025-03-17T18:45:13.9716277Z [2, 3, 2, 2] # Rank 0 2025-03-17T18:45:13.9716659Z [2, 2, 1, 2] # Rank 1 2025-03-17T18:45:13.9717039Z [1, 2, 1, 2] # Rank 2 2025-03-17T18:45:13.9717417Z [1, 2, 1, 1] # Rank 3 2025-03-17T18:45:13.9717768Z >>> output = ... 2025-03-17T18:45:13.9718142Z >>> dist.all_to_all_single(output, input, output_splits, input_splits) 2025-03-17T18:45:13.9718571Z >>> output 2025-03-17T18:45:13.9718875Z tensor([ 0, 1, 10, 11, 12, 20, 21, 30, 31]) # Rank 0 2025-03-17T18:45:13.9719328Z tensor([ 2, 3, 13, 14, 22, 32, 33]) # Rank 1 2025-03-17T18:45:13.9719780Z tensor([ 4, 15, 16, 23, 34, 35]) # Rank 2 2025-03-17T18:45:13.9720233Z tensor([ 5, 17, 18, 24, 36]) # Rank 3 2025-03-17T18:45:13.9720526Z 2025-03-17T18:45:13.9720530Z 2025-03-17T18:45:13.9720699Z >>> # Another example with tensors of torch.cfloat type. 2025-03-17T18:45:13.9721097Z >>> input = torch.tensor( 2025-03-17T18:45:13.9721438Z ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat 2025-03-17T18:45:13.9721808Z ... 
) + 4 * rank * (1 + 1j) 2025-03-17T18:45:13.9722097Z >>> input 2025-03-17T18:45:13.9722418Z tensor([1+1j, 2+2j, 3+3j, 4+4j]) # Rank 0 2025-03-17T18:45:13.9722891Z tensor([5+5j, 6+6j, 7+7j, 8+8j]) # Rank 1 2025-03-17T18:45:13.9723385Z tensor([9+9j, 10+10j, 11+11j, 12+12j]) # Rank 2 2025-03-17T18:45:13.9723886Z tensor([13+13j, 14+14j, 15+15j, 16+16j]) # Rank 3 2025-03-17T18:45:13.9724350Z >>> output = torch.empty([4], dtype=torch.int64) 2025-03-17T18:45:13.9724741Z >>> dist.all_to_all_single(output, input) 2025-03-17T18:45:13.9725155Z >>> output 2025-03-17T18:45:13.9725486Z tensor([1+1j, 5+5j, 9+9j, 13+13j]) # Rank 0 2025-03-17T18:45:13.9725976Z tensor([2+2j, 6+6j, 10+10j, 14+14j]) # Rank 1 2025-03-17T18:45:13.9726469Z tensor([3+3j, 7+7j, 11+11j, 15+15j]) # Rank 2 2025-03-17T18:45:13.9726961Z tensor([4+4j, 8+8j, 12+12j, 16+16j]) # Rank 3 2025-03-17T18:45:13.9727263Z 2025-03-17T18:45:13.9727536Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9727914Z 2025-03-17T18:45:13.9728512Z msg = Cannot scrape callname=all_to_all in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=4523. 2025-03-17T18:45:13.9729478Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9729863Z 2025-03-17T18:45:13.9730251Z Scatters list of input tensors to all processes in a group and return gathered list of tensors in output list. 2025-03-17T18:45:13.9730745Z 2025-03-17T18:45:13.9730874Z Complex tensors are supported. 2025-03-17T18:45:13.9731074Z 2025-03-17T18:45:13.9731178Z Args: 2025-03-17T18:45:13.9731533Z output_tensor_list (list[Tensor]): List of tensors to be gathered one 2025-03-17T18:45:13.9731978Z per rank. 2025-03-17T18:45:13.9732370Z input_tensor_list (list[Tensor]): List of tensors to scatter one per rank. 2025-03-17T18:45:13.9732971Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:13.9733467Z the default process group will be used. 2025-03-17T18:45:13.9733981Z async_op (bool, optional): Whether this op should be an async op. 2025-03-17T18:45:13.9734301Z 2025-03-17T18:45:13.9734406Z Returns: 2025-03-17T18:45:13.9734672Z Async work handle, if async_op is set to True. 2025-03-17T18:45:13.9735096Z None, if not async_op or if not part of the group. 2025-03-17T18:45:13.9735373Z 2025-03-17T18:45:13.9735482Z .. warning:: 2025-03-17T18:45:13.9735786Z `all_to_all` is experimental and subject to change. 
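The list form of the collective can be mirrored the same way: output_tensor_list[j] on rank r comes from input_tensor_list[r] on rank j, i.e. a transpose of the per-rank lists. A minimal single-process sketch mirroring the first example below (illustrative only, not the collective itself):

import torch

world_size = 4
inputs = [list((torch.arange(4) + r * 4).chunk(world_size)) for r in range(world_size)]
outputs = [
    [inputs[src][dst] for src in range(world_size)]   # rank dst's j-th output is rank src's dst-th input
    for dst in range(world_size)
]
print(outputs[0])   # [tensor([0]), tensor([4]), tensor([8]), tensor([12])], as on Rank 0 below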
2025-03-17T18:45:13.9736066Z 2025-03-17T18:45:13.9736159Z Examples: 2025-03-17T18:45:13.9736417Z >>> # xdoctest: +SKIP("Undefined rank") 2025-03-17T18:45:13.9736968Z >>> input = torch.arange(4) + rank * 4 2025-03-17T18:45:13.9737325Z >>> input = list(input.chunk(4)) 2025-03-17T18:45:13.9737642Z >>> input 2025-03-17T18:45:13.9737962Z [tensor([0]), tensor([1]), tensor([2]), tensor([3])] # Rank 0 2025-03-17T18:45:13.9738431Z [tensor([4]), tensor([5]), tensor([6]), tensor([7])] # Rank 1 2025-03-17T18:45:13.9738906Z [tensor([8]), tensor([9]), tensor([10]), tensor([11])] # Rank 2 2025-03-17T18:45:13.9739376Z [tensor([12]), tensor([13]), tensor([14]), tensor([15])] # Rank 3 2025-03-17T18:45:13.9739866Z >>> output = list(torch.empty([4], dtype=torch.int64).chunk(4)) 2025-03-17T18:45:13.9740302Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:13.9740625Z >>> output 2025-03-17T18:45:13.9740948Z [tensor([0]), tensor([4]), tensor([8]), tensor([12])] # Rank 0 2025-03-17T18:45:13.9741418Z [tensor([1]), tensor([5]), tensor([9]), tensor([13])] # Rank 1 2025-03-17T18:45:13.9741888Z [tensor([2]), tensor([6]), tensor([10]), tensor([14])] # Rank 2 2025-03-17T18:45:13.9742356Z [tensor([3]), tensor([7]), tensor([11]), tensor([15])] # Rank 3 2025-03-17T18:45:13.9742645Z 2025-03-17T18:45:13.9742827Z >>> # Essentially, it is similar to following operation: 2025-03-17T18:45:13.9743226Z >>> scatter_list = input 2025-03-17T18:45:13.9743518Z >>> gather_list = output 2025-03-17T18:45:13.9743831Z >>> for i in range(world_size): 2025-03-17T18:45:13.9744285Z >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src=i) 2025-03-17T18:45:13.9744650Z 2025-03-17T18:45:13.9744740Z >>> input 2025-03-17T18:45:13.9745048Z tensor([0, 1, 2, 3, 4, 5]) # Rank 0 2025-03-17T18:45:13.9745616Z tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 2025-03-17T18:45:13.9746075Z tensor([20, 21, 22, 23, 24]) # Rank 2 2025-03-17T18:45:13.9746593Z tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 2025-03-17T18:45:13.9746991Z >>> input_splits 2025-03-17T18:45:13.9747286Z [2, 2, 1, 1] # Rank 0 2025-03-17T18:45:13.9747670Z [3, 2, 2, 2] # Rank 1 2025-03-17T18:45:13.9748050Z [2, 1, 1, 1] # Rank 2 2025-03-17T18:45:13.9748432Z [2, 2, 2, 1] # Rank 3 2025-03-17T18:45:13.9748784Z >>> output_splits 2025-03-17T18:45:13.9749074Z [2, 3, 2, 2] # Rank 0 2025-03-17T18:45:13.9749454Z [2, 2, 1, 2] # Rank 1 2025-03-17T18:45:13.9749833Z [1, 2, 1, 2] # Rank 2 2025-03-17T18:45:13.9750218Z [1, 2, 1, 1] # Rank 3 2025-03-17T18:45:13.9750610Z >>> input = list(input.split(input_splits)) 2025-03-17T18:45:13.9750960Z >>> input 2025-03-17T18:45:13.9751314Z [tensor([0, 1]), tensor([2, 3]), tensor([4]), tensor([5])] # Rank 0 2025-03-17T18:45:13.9751866Z [tensor([10, 11, 12]), tensor([13, 14]), tensor([15, 16]), tensor([17, 18])] # Rank 1 2025-03-17T18:45:13.9752490Z [tensor([20, 21]), tensor([22]), tensor([23]), tensor([24])] # Rank 2 2025-03-17T18:45:13.9753038Z [tensor([30, 31]), tensor([32, 33]), tensor([34, 35]), tensor([36])] # Rank 3 2025-03-17T18:45:13.9753479Z >>> output = ... 
2025-03-17T18:45:13.9753754Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:13.9754078Z >>> output 2025-03-17T18:45:13.9754431Z [tensor([0, 1]), tensor([10, 11, 12]), tensor([20, 21]), tensor([30, 31])] # Rank 0 2025-03-17T18:45:13.9754974Z [tensor([2, 3]), tensor([13, 14]), tensor([22]), tensor([32, 33])] # Rank 1 2025-03-17T18:45:13.9755519Z [tensor([4]), tensor([15, 16]), tensor([23]), tensor([34, 35])] # Rank 2 2025-03-17T18:45:13.9756067Z [tensor([5]), tensor([17, 18]), tensor([24]), tensor([36])] # Rank 3 2025-03-17T18:45:13.9756412Z 2025-03-17T18:45:13.9756580Z >>> # Another example with tensors of torch.cfloat type. 2025-03-17T18:45:13.9756978Z >>> input = torch.tensor( 2025-03-17T18:45:13.9757420Z ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat 2025-03-17T18:45:13.9757964Z ... ) + 4 * rank * (1 + 1j) 2025-03-17T18:45:13.9758470Z >>> input = list(input.chunk(4)) 2025-03-17T18:45:13.9758962Z >>> input 2025-03-17T18:45:13.9759503Z [tensor([1+1j]), tensor([2+2j]), tensor([3+3j]), tensor([4+4j])] # Rank 0 2025-03-17T18:45:13.9760468Z [tensor([5+5j]), tensor([6+6j]), tensor([7+7j]), tensor([8+8j])] # Rank 1 2025-03-17T18:45:13.9761035Z [tensor([9+9j]), tensor([10+10j]), tensor([11+11j]), tensor([12+12j])] # Rank 2 2025-03-17T18:45:13.9761612Z [tensor([13+13j]), tensor([14+14j]), tensor([15+15j]), tensor([16+16j])] # Rank 3 2025-03-17T18:45:13.9762161Z >>> output = list(torch.empty([4], dtype=torch.int64).chunk(4)) 2025-03-17T18:45:13.9762593Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:13.9762915Z >>> output 2025-03-17T18:45:13.9763275Z [tensor([1+1j]), tensor([5+5j]), tensor([9+9j]), tensor([13+13j])] # Rank 0 2025-03-17T18:45:13.9763840Z [tensor([2+2j]), tensor([6+6j]), tensor([10+10j]), tensor([14+14j])] # Rank 1 2025-03-17T18:45:13.9764399Z [tensor([3+3j]), tensor([7+7j]), tensor([11+11j]), tensor([15+15j])] # Rank 2 2025-03-17T18:45:13.9764957Z [tensor([4+4j]), tensor([8+8j]), tensor([12+12j]), tensor([16+16j])] # Rank 3 2025-03-17T18:45:13.9765384Z 2025-03-17T18:45:13.9765390Z 2025-03-17T18:45:13.9765668Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9766047Z 2025-03-17T18:45:13.9766604Z msg = Cannot scrape callname=__doc__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/launch.py line=2. 2025-03-17T18:45:13.9767495Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:13.9767898Z 2025-03-17T18:45:13.9768023Z Module ``torch.distributed.launch``. 2025-03-17T18:45:13.9768262Z 2025-03-17T18:45:13.9768517Z ``torch.distributed.launch`` is a module that spawns up multiple distributed 2025-03-17T18:45:13.9769060Z training processes on each of the training nodes. 2025-03-17T18:45:13.9769338Z 2025-03-17T18:45:13.9769438Z .. warning:: 2025-03-17T18:45:13.9769585Z 2025-03-17T18:45:13.9769847Z This module is going to be deprecated in favor of :ref:`torchrun `. 2025-03-17T18:45:13.9770243Z 2025-03-17T18:45:13.9770492Z The utility can be used for single-node distributed training, in which one or 2025-03-17T18:45:13.9771109Z more processes per node will be spawned. The utility can be used for either 2025-03-17T18:45:13.9771705Z CPU training or GPU training. If the utility is used for GPU training, 2025-03-17T18:45:13.9772312Z each distributed process will be operating on a single GPU. This can achieve 2025-03-17T18:45:13.9772937Z well-improved single-node training performance. 
It can also be used in 2025-03-17T18:45:13.9773578Z multi-node distributed training, by spawning up multiple processes on each node 2025-03-17T18:45:13.9774285Z for well-improved multi-node distributed training performance as well. 2025-03-17T18:45:13.9774891Z This will especially be beneficial for systems with multiple Infiniband 2025-03-17T18:45:13.9775526Z interfaces that have direct-GPU support, since all of them can be utilized for 2025-03-17T18:45:13.9776040Z aggregated communication bandwidth. 2025-03-17T18:45:13.9776263Z 2025-03-17T18:45:13.9776522Z In both cases of single-node distributed training or multi-node distributed 2025-03-17T18:45:13.9777137Z training, this utility will launch the given number of processes per node 2025-03-17T18:45:13.9777738Z (``--nproc-per-node``). If used for GPU training, this number needs to be less 2025-03-17T18:45:13.9778325Z or equal to the number of GPUs on the current system (``nproc_per_node``), 2025-03-17T18:45:13.9778886Z and each process will be operating on a single GPU from *GPU 0 to 2025-03-17T18:45:13.9779333Z GPU (nproc_per_node - 1)*. 2025-03-17T18:45:13.9779522Z 2025-03-17T18:45:13.9779647Z **How to use this module:** 2025-03-17T18:45:13.9779831Z 2025-03-17T18:45:13.9780004Z 1. Single-Node multi-process distributed training 2025-03-17T18:45:13.9780273Z 2025-03-17T18:45:13.9780379Z :: 2025-03-17T18:45:13.9780496Z 2025-03-17T18:45:13.9780757Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:13.9781330Z YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other 2025-03-17T18:45:13.9781783Z arguments of your training script) 2025-03-17T18:45:13.9782020Z 2025-03-17T18:45:13.9782249Z 2. Multi-Node multi-process distributed training: (e.g. two nodes) 2025-03-17T18:45:13.9782580Z 2025-03-17T18:45:13.9782584Z 2025-03-17T18:45:13.9782743Z Node 1: *(IP: 192.168.1.1, and has a free port: 1234)* 2025-03-17T18:45:13.9783002Z 2025-03-17T18:45:13.9783105Z :: 2025-03-17T18:45:13.9783219Z 2025-03-17T18:45:13.9783473Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:13.9784004Z --nnodes=2 --node-rank=0 --master-addr="192.168.1.1" 2025-03-17T18:45:13.9784500Z --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 2025-03-17T18:45:13.9784999Z and all other arguments of your training script) 2025-03-17T18:45:13.9785287Z 2025-03-17T18:45:13.9785378Z Node 2: 2025-03-17T18:45:13.9785514Z 2025-03-17T18:45:13.9785602Z :: 2025-03-17T18:45:13.9785809Z 2025-03-17T18:45:13.9786051Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:13.9786680Z --nnodes=2 --node-rank=1 --master-addr="192.168.1.1" 2025-03-17T18:45:13.9787180Z --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 2025-03-17T18:45:13.9787675Z and all other arguments of your training script) 2025-03-17T18:45:13.9787957Z 2025-03-17T18:45:13.9788123Z 3. To look up what optional arguments this module offers: 2025-03-17T18:45:13.9788415Z 2025-03-17T18:45:13.9788503Z :: 2025-03-17T18:45:13.9788636Z 2025-03-17T18:45:13.9788775Z python -m torch.distributed.launch --help 2025-03-17T18:45:13.9789039Z 2025-03-17T18:45:13.9789043Z 2025-03-17T18:45:13.9789148Z **Important Notices:** 2025-03-17T18:45:13.9789323Z 2025-03-17T18:45:13.9789514Z 1. This utility and multi-process distributed (single-node or 2025-03-17T18:45:13.9790085Z multi-node) GPU training currently only achieves the best performance using 2025-03-17T18:45:13.9790723Z the NCCL distributed backend. 
Thus NCCL backend is the recommended backend to 2025-03-17T18:45:13.9791212Z use for GPU training. 2025-03-17T18:45:13.9791384Z 2025-03-17T18:45:13.9791608Z 2. In your training program, you must parse the command-line argument: 2025-03-17T18:45:13.9792189Z ``--local-rank=LOCAL_PROCESS_RANK``, which will be provided by this module. 2025-03-17T18:45:13.9792782Z If your training program uses GPUs, you should ensure that your code only 2025-03-17T18:45:13.9793340Z runs on the GPU device of LOCAL_PROCESS_RANK. This can be done by: 2025-03-17T18:45:13.9793723Z 2025-03-17T18:45:13.9793841Z Parsing the local_rank argument 2025-03-17T18:45:13.9794038Z 2025-03-17T18:45:13.9794134Z :: 2025-03-17T18:45:13.9794243Z 2025-03-17T18:45:13.9794354Z >>> # xdoctest: +SKIP 2025-03-17T18:45:13.9794628Z >>> import argparse 2025-03-17T18:45:13.9794923Z >>> parser = argparse.ArgumentParser() 2025-03-17T18:45:13.9795369Z >>> parser.add_argument("--local-rank", "--local_rank", type=int) 2025-03-17T18:45:13.9795806Z >>> args = parser.parse_args() 2025-03-17T18:45:13.9796013Z 2025-03-17T18:45:13.9796149Z Set your device to local rank using either 2025-03-17T18:45:13.9796386Z 2025-03-17T18:45:13.9796485Z :: 2025-03-17T18:45:13.9796598Z 2025-03-17T18:45:13.9796810Z >>> torch.cuda.set_device(args.local_rank) # before your code runs 2025-03-17T18:45:13.9797132Z 2025-03-17T18:45:13.9797229Z or 2025-03-17T18:45:13.9797341Z 2025-03-17T18:45:13.9797435Z :: 2025-03-17T18:45:13.9797546Z 2025-03-17T18:45:13.9797683Z >>> with torch.cuda.device(args.local_rank): 2025-03-17T18:45:13.9798028Z >>> # your code to run 2025-03-17T18:45:13.9798301Z >>> ... 2025-03-17T18:45:13.9798441Z 2025-03-17T18:45:13.9798545Z .. versionchanged:: 2.0.0 2025-03-17T18:45:13.9798729Z 2025-03-17T18:45:13.9798976Z The launcher will passes the ``--local-rank=`` argument to your script. 2025-03-17T18:45:13.9799596Z From PyTorch 2.0.0 onwards, the dashed ``--local-rank`` is preferred over the 2025-03-17T18:45:13.9800106Z previously used underscored ``--local_rank``. 2025-03-17T18:45:13.9800370Z 2025-03-17T18:45:13.9800614Z For backward compatibility, it may be necessary for users to handle both 2025-03-17T18:45:13.9801252Z cases in their argument parsing code. This means including both ``"--local-rank"`` 2025-03-17T18:45:13.9801873Z and ``"--local_rank"`` in the argument parser. If only ``"--local_rank"`` is 2025-03-17T18:45:13.9802476Z provided, the launcher will trigger an error: "error: unrecognized arguments: 2025-03-17T18:45:13.9803098Z --local-rank=". For training code that only supports PyTorch 2.0.0+, 2025-03-17T18:45:13.9803607Z including ``"--local-rank"`` should be sufficient. 2025-03-17T18:45:13.9803880Z 2025-03-17T18:45:13.9804120Z 3. In your training program, you are supposed to call the following function 2025-03-17T18:45:13.9804778Z at the beginning to start the distributed backend. It is strongly recommended 2025-03-17T18:45:13.9805380Z that ``init_method=env://``. Other init methods (e.g. ``tcp://``) may work, 2025-03-17T18:45:13.9805919Z but ``env://`` is the one that is officially supported by this module. 2025-03-17T18:45:13.9806245Z 2025-03-17T18:45:13.9806332Z :: 2025-03-17T18:45:13.9806457Z 2025-03-17T18:45:13.9806665Z >>> torch.distributed.init_process_group(backend='YOUR BACKEND', 2025-03-17T18:45:13.9807141Z >>> init_method='env://') 2025-03-17T18:45:13.9807386Z 2025-03-17T18:45:13.9807636Z 4. 
In your training program, you can either use regular distributed functions 2025-03-17T18:45:13.9808251Z or use :func:`torch.nn.parallel.DistributedDataParallel` module. If your 2025-03-17T18:45:13.9808825Z training program uses GPUs for training and you would like to use 2025-03-17T18:45:13.9809346Z :func:`torch.nn.parallel.DistributedDataParallel` module, 2025-03-17T18:45:13.9809762Z here is how to configure it. 2025-03-17T18:45:13.9809953Z 2025-03-17T18:45:13.9810052Z :: 2025-03-17T18:45:13.9810169Z 2025-03-17T18:45:13.9810374Z >>> model = torch.nn.parallel.DistributedDataParallel(model, 2025-03-17T18:45:13.9810833Z >>> device_ids=[args.local_rank], 2025-03-17T18:45:13.9811237Z >>> output_device=args.local_rank) 2025-03-17T18:45:13.9811497Z 2025-03-17T18:45:13.9811753Z Please ensure that ``device_ids`` argument is set to be the only GPU device id 2025-03-17T18:45:13.9812367Z that your code will be operating on. This is generally the local rank of the 2025-03-17T18:45:13.9813026Z process. In other words, the ``device_ids`` needs to be ``[args.local_rank]``, 2025-03-17T18:45:13.9813608Z and ``output_device`` needs to be ``args.local_rank`` in order to use this 2025-03-17T18:45:13.9814038Z utility 2025-03-17T18:45:13.9814156Z 2025-03-17T18:45:13.9814412Z 5. Another way to pass ``local_rank`` to the subprocesses via environment variable 2025-03-17T18:45:13.9815013Z ``LOCAL_RANK``. This behavior is enabled when you launch the script with 2025-03-17T18:45:13.9815564Z ``--use-env=True``. You must adjust the subprocess example above to replace 2025-03-17T18:45:13.9816105Z ``args.local_rank`` with ``os.environ['LOCAL_RANK']``; the launcher 2025-03-17T18:45:13.9816596Z will not pass ``--local-rank`` when you specify this flag. 2025-03-17T18:45:13.9816893Z 2025-03-17T18:45:13.9816986Z .. warning:: 2025-03-17T18:45:13.9817122Z 2025-03-17T18:45:13.9817327Z ``local_rank`` is NOT globally unique: it is only unique per process 2025-03-17T18:45:13.9817849Z on a machine. Thus, don't use it to decide if you should, e.g., 2025-03-17T18:45:13.9818281Z write to a networked filesystem. See 2025-03-17T18:45:13.9818739Z https://github.com/pytorch/pytorch/issues/12042 for an example of 2025-03-17T18:45:13.9819248Z how things can go wrong if you don't do this correctly. 2025-03-17T18:45:13.9819537Z 2025-03-17T18:45:13.9819541Z 2025-03-17T18:45:13.9819549Z 2025-03-17T18:45:13.9819553Z 2025-03-17T18:45:13.9819809Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:13.9820192Z 2025-03-17T18:45:14.0407289Z msg = Cannot scrape callname=init_from_local_shards in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py line=361. 2025-03-17T18:45:14.0408393Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.0408794Z 2025-03-17T18:45:14.0409045Z Creates an :class:`ShardedTensor` from local shards and the global metadata. 2025-03-17T18:45:14.0409583Z Needs to be called on all ranks in an SPMD fashion. 2025-03-17T18:45:14.0409860Z 2025-03-17T18:45:14.0409951Z Args: 2025-03-17T18:45:14.0410393Z local_shards (List[:class `torch.distributed._shard.sharded_tensor.Shard`]): A list 2025-03-17T18:45:14.0410971Z of shards that represent the local shards on this rank. 2025-03-17T18:45:14.0411657Z global_size (int...): a list, tuple, or `torch.Size` of integers defining the 2025-03-17T18:45:14.0412154Z shape of the overall sharded tensor. 
2025-03-17T18:45:14.0412405Z 2025-03-17T18:45:14.0412501Z Keyword args: 2025-03-17T18:45:14.0412920Z process_group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:14.0413456Z the default process group will be used. 2025-03-17T18:45:14.0413890Z init_rrefs (bool, optional): Whether or not to initialize 2025-03-17T18:45:14.0414405Z :class:`torch.distributed.rpc.RRef`s pointing to remote shards. 2025-03-17T18:45:14.0414950Z Need to initialize the RPC Framework if specified as ``True``. 2025-03-17T18:45:14.0415378Z Default: ``False``. 2025-03-17T18:45:14.0415573Z 2025-03-17T18:45:14.0415664Z Returns: 2025-03-17T18:45:14.0415951Z A :class:`ShardedTensor` object handle on this rank 2025-03-17T18:45:14.0416236Z 2025-03-17T18:45:14.0416240Z 2025-03-17T18:45:14.0416337Z Examples: 2025-03-17T18:45:14.0416736Z Suppose we want construct a sharded tensor on two ranks, global size = (10, 5), 2025-03-17T18:45:14.0417311Z each shard have a (5, 5) local tensor, we can do it like below: 2025-03-17T18:45:14.0417613Z 2025-03-17T18:45:14.0417717Z on rank 0: 2025-03-17T18:45:14.0417986Z >>> # xdoctest: +SKIP("not distributed") 2025-03-17T18:45:14.0418351Z >>> local_shard_metadata = ShardMetadata( 2025-03-17T18:45:14.0418705Z >>> shard_offsets=[0, 0], 2025-03-17T18:45:14.0419015Z >>> shard_lengths=[5, 5], 2025-03-17T18:45:14.0419336Z >>> placement="rank:0/cuda:0" 2025-03-17T18:45:14.0419724Z >>> ) 2025-03-17T18:45:14.0420052Z >>> local_shards = [Shard(torch.randn(5, 5), local_shard_metadata)] 2025-03-17T18:45:14.0420567Z >>> sharded_tensor = init_from_local_shards(local_shards, [10, 5]) 2025-03-17T18:45:14.0420897Z 2025-03-17T18:45:14.0420991Z on rank 1: 2025-03-17T18:45:14.0421264Z >>> # xdoctest: +SKIP("not distributed") 2025-03-17T18:45:14.0421638Z >>> local_shard_metadata = ShardMetadata( 2025-03-17T18:45:14.0421996Z >>> shard_offsets=[5, 0], 2025-03-17T18:45:14.0422307Z >>> shard_lengths=[5, 5], 2025-03-17T18:45:14.0422634Z >>> placement="rank:1/cuda:1" 2025-03-17T18:45:14.0422945Z >>> ) 2025-03-17T18:45:14.0423278Z >>> local_shards = [Shard(torch.randn(5, 5), local_shard_metadata)] 2025-03-17T18:45:14.0423801Z >>> sharded_tensor = init_from_local_shards(local_shards, [10, 5]) 2025-03-17T18:45:14.0424125Z 2025-03-17T18:45:14.0424387Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.0424782Z 2025-03-17T18:45:14.0533299Z msg = Cannot scrape callname=ShardedTensor._init_from_local_tensor in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/api.py line=799. 2025-03-17T18:45:14.0534444Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.0534849Z 2025-03-17T18:45:14.0535136Z Initialize a ShardedTensor given only one local tensor, global sharded tensor 2025-03-17T18:45:14.0535654Z size and sharding spec on each rank. 2025-03-17T18:45:14.0535884Z 2025-03-17T18:45:14.0535974Z Args: 2025-03-17T18:45:14.0536338Z local_tensor (Tensor): Single tensor of local shard stored in each rank. 2025-03-17T18:45:14.0537170Z sharding_spec (:class:`torch.distributed._shard.sharding_spec.ShardingSpec`): 2025-03-17T18:45:14.0537752Z The specification describing how to shard the Tensor. 2025-03-17T18:45:14.0538236Z global_size (Sequence[int]): Size of the sharded tensor. 2025-03-17T18:45:14.0538810Z process_group (ProcessGroup, optional): The process group to aggregate on. 
2025-03-17T18:45:14.0539303Z Default: None 2025-03-17T18:45:14.0539652Z init_rrefs (bool, optional): Whether or not to initialize 2025-03-17T18:45:14.0540244Z :class:`torch.distributed.rpc.RRef`s pointing to remote shards. 2025-03-17T18:45:14.0541148Z Need to initialize the RPC Framework if specified as ``True``. 2025-03-17T18:45:14.0541598Z Default: ``False``. 2025-03-17T18:45:14.0541782Z 2025-03-17T18:45:14.0541890Z Returns: 2025-03-17T18:45:14.0542280Z A :class:`ShardedTensor` sharded based on the given sharding_spec with local 2025-03-17T18:45:14.0542809Z tensor stored in the current rank. 2025-03-17T18:45:14.0543040Z 2025-03-17T18:45:14.0543151Z Examples: 2025-03-17T18:45:14.0543390Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.0543713Z >>> # All tensors below are of torch.int64 type. 2025-03-17T18:45:14.0544103Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:14.0544531Z >>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank 2025-03-17T18:45:14.0545045Z >>> local_tensor = torch.unsqueeze(torch.cat([tensor, tensor + 2])) 2025-03-17T18:45:14.0545462Z >>> local_tensor 2025-03-17T18:45:14.0545729Z tensor([[1, 2, 3, 4]]) # Rank 0 2025-03-17T18:45:14.0546039Z tensor([[3, 4, 5, 6]]) # Rank 1 2025-03-17T18:45:14.0546352Z >>> sharding_dim = 0 2025-03-17T18:45:14.0546741Z >>> sharding_spec = ChunkShardingSpec( 2025-03-17T18:45:14.0547095Z dim=sharding_dim, 2025-03-17T18:45:14.0547394Z placements=[ 2025-03-17T18:45:14.0547677Z "rank:0/cuda:0", 2025-03-17T18:45:14.0547978Z "rank:1/cuda:1", 2025-03-17T18:45:14.0548267Z ], 2025-03-17T18:45:14.0548500Z ) 2025-03-17T18:45:14.0548774Z >>> st = ShardedTensor._init_from_local_tensor( 2025-03-17T18:45:14.0549168Z ... local_tensor, sharding_spec, [2, 4] 2025-03-17T18:45:14.0549615Z ... ) 2025-03-17T18:45:14.0549841Z >>> st 2025-03-17T18:45:14.0550064Z ShardedTensor( 2025-03-17T18:45:14.0550338Z ShardedTensorMetadata( 2025-03-17T18:45:14.0550644Z shards_metadata=[ 2025-03-17T18:45:14.0551115Z ShardMetadata(shard_offsets=[0, 0], shard_sizes=[1, 4], placement=rank:0/cuda:0), 2025-03-17T18:45:14.0551791Z ShardMetadata(shard_offsets=[1, 0], shard_sizes=[1, 4], placement=rank:1/cuda:1), 2025-03-17T18:45:14.0552280Z ], 2025-03-17T18:45:14.0552534Z size=torch.Size([2, 4]) 2025-03-17T18:45:14.0552837Z ) 2025-03-17T18:45:14.0553079Z >>> st.local_tensor() 2025-03-17T18:45:14.0553367Z tensor([1, 2, 3, 4]) # Rank 0 2025-03-17T18:45:14.0553674Z tensor([3, 4, 5, 6]) # Rank 1 2025-03-17T18:45:14.0553869Z 2025-03-17T18:45:14.0554161Z Warning: This API is experimental and subject to change. It lacks of a fully across 2025-03-17T18:45:14.0554823Z rank validations, and we only validate the local shard on the current rank. 2025-03-17T18:45:14.0555433Z We fully rely on the user to ensure local tensor is sharded based on the 2025-03-17T18:45:14.0555890Z sharding spec. 2025-03-17T18:45:14.0556057Z 2025-03-17T18:45:14.0556333Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.0556713Z 2025-03-17T18:45:14.0557424Z msg = Cannot scrape callname=ShardedTensor.reshard in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/api.py line=1040. 2025-03-17T18:45:14.0558511Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.0558901Z 2025-03-17T18:45:14.0559176Z Reshard a sharded tensor given the ``resharding_spec``. For now, we only support 2025-03-17T18:45:14.0559670Z single local shard. 
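Both the example above and the reshard example that follows build a ``ChunkShardingSpec`` by hand; as a hedged, standalone sketch (the two-rank layout and the placement strings are assumptions mirroring those examples), the construction looks like::

    from torch.distributed._shard.sharding_spec import ChunkShardingSpec

    # Shard along dim 0, one chunk per rank; "rank:<r>/cuda:<r>" names the
    # owning rank and its device, as in the examples in this section.
    spec = ChunkShardingSpec(
        dim=0,
        placements=[
            "rank:0/cuda:0",
            "rank:1/cuda:1",
        ],
    )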
2025-03-17T18:45:14.0559824Z 2025-03-17T18:45:14.0560067Z If ``resharding_spec`` is same as the original one, this becomes a no-op. 2025-03-17T18:45:14.0560684Z If only ``resharding_spec`` shares the same sharding dim with the original one, 2025-03-17T18:45:14.0561174Z we swap local shards directly. 2025-03-17T18:45:14.0561647Z For more generic cases, we merge different shards across different ranks and split 2025-03-17T18:45:14.0562303Z the local shards based on the ``resharding_spec`` via `all_to_all` collective API. 2025-03-17T18:45:14.0562683Z 2025-03-17T18:45:14.0562871Z Args: 2025-03-17T18:45:14.0563302Z resharding_spec (:class:`torch.distributed._shard.sharding_spec.ShardingSpec`): The 2025-03-17T18:45:14.0563905Z specification describing how the tensor is sharded. 2025-03-17T18:45:14.0564190Z 2025-03-17T18:45:14.0564294Z Returns: 2025-03-17T18:45:14.0564626Z A :class:`ShardedTensor` object whose local shards are resharded. 2025-03-17T18:45:14.0564967Z 2025-03-17T18:45:14.0565059Z Examples: 2025-03-17T18:45:14.0565297Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.0565601Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:14.0566025Z >>> tensor = torch.arange(4, dtype=torch.int64) + 1 + 2 * rank 2025-03-17T18:45:14.0566455Z >>> tensor = torch.stack([tensor, tensor]) 2025-03-17T18:45:14.0566790Z >>> tensor 2025-03-17T18:45:14.0567055Z tensor([[1, 2, 3, 4], [1, 2, 3, 4]]) # Rank 0 2025-03-17T18:45:14.0567419Z tensor([[3, 4, 5, 6], [3, 4, 5, 6]]) # Rank 1 2025-03-17T18:45:14.0567786Z tensor([[5, 6, 7, 8], [5, 6, 7, 8]]) # Rank 2 2025-03-17T18:45:14.0568151Z tensor([[7, 8, 9, 10], [7, 8, 9, 10]]) # Rank 3 2025-03-17T18:45:14.0568504Z >>> sharding_dim = 0 2025-03-17T18:45:14.0568800Z >>> spec = ChunkShardingSpec( 2025-03-17T18:45:14.0569121Z dim=sharding_dim, 2025-03-17T18:45:14.0569412Z placements=[ 2025-03-17T18:45:14.0569689Z "rank:0/cuda:0", 2025-03-17T18:45:14.0569985Z "rank:1/cuda:1", 2025-03-17T18:45:14.0570268Z "rank:2/cuda:2", 2025-03-17T18:45:14.0570561Z "rank:3/cuda:3", 2025-03-17T18:45:14.0570845Z ], 2025-03-17T18:45:14.0571140Z ) 2025-03-17T18:45:14.0571376Z >>> current_offsets = [0] * 2 2025-03-17T18:45:14.0571695Z >>> current_offsets[0] = rank * 2 2025-03-17T18:45:14.0572043Z >>> shard_metadata = ShardMetadata( 2025-03-17T18:45:14.0572433Z shard_offsets=copy.deepcopy(current_offsets), 2025-03-17T18:45:14.0572825Z shard_sizes=tensor.size(), 2025-03-17T18:45:14.0573185Z placement=spec.placements[rank], 2025-03-17T18:45:14.0573518Z ) 2025-03-17T18:45:14.0573749Z >>> local_shards = [ 2025-03-17T18:45:14.0574017Z Shard( 2025-03-17T18:45:14.0574269Z tensor=tensor, 2025-03-17T18:45:14.0574575Z metadata=shard_metadata, 2025-03-17T18:45:14.0574894Z ) 2025-03-17T18:45:14.0575110Z ] 2025-03-17T18:45:14.0575478Z >>> st = ShardedTensor._init_from_local_shards(local_shards, tensor.size()) 2025-03-17T18:45:14.0575943Z >>> sharding_dim = 1 2025-03-17T18:45:14.0576252Z >>> resharding_spec = ChunkShardingSpec( 2025-03-17T18:45:14.0576607Z dim=sharding_dim, 2025-03-17T18:45:14.0576903Z placements=[ 2025-03-17T18:45:14.0577180Z "rank:0/cuda:0", 2025-03-17T18:45:14.0577477Z "rank:1/cuda:1", 2025-03-17T18:45:14.0577770Z "rank:2/cuda:2", 2025-03-17T18:45:14.0578071Z "rank:3/cuda:3", 2025-03-17T18:45:14.0578357Z ], 2025-03-17T18:45:14.0578590Z ) 2025-03-17T18:45:14.0578835Z >>> st.reshard(resharding_spec) 2025-03-17T18:45:14.0579174Z >>> tensor = st.local_shards()[0].tensor 2025-03-17T18:45:14.0579502Z >>> tensor 2025-03-17T18:45:14.0579780Z tensor([[1], [1], [3], [3], [5], [5], [7], [7]]) # 
Rank 0 2025-03-17T18:45:14.0580189Z tensor([[2], [2], [4], [4], [6], [6], [8], [8]]) # Rank 1 2025-03-17T18:45:14.0580594Z tensor([[3], [3], [5], [5], [7], [7], [9], [9]]) # Rank 2 2025-03-17T18:45:14.0581005Z tensor([[4], [4], [6], [6], [8], [8], [10], [10]]) # Rank 3 2025-03-17T18:45:14.0581284Z 2025-03-17T18:45:14.0581545Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.0581936Z 2025-03-17T18:45:14.0744321Z msg = Cannot scrape callname=ShardingPlan in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharding_plan/api.py line=12. 2025-03-17T18:45:14.0745607Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.0746021Z 2025-03-17T18:45:14.0746304Z Representation of a sharding plan, describes how to shard a module 2025-03-17T18:45:14.0747040Z across hosts. `plan` is used to shard module parameters according to the spec provided, 2025-03-17T18:45:14.0747788Z `output_plan` and `return_local_tensor` are optional, they are used to specify the output 2025-03-17T18:45:14.0748497Z layout of a module with a spec, and when to convert back to data parallel fashion. 2025-03-17T18:45:14.0748920Z 2025-03-17T18:45:14.0749012Z Args: 2025-03-17T18:45:14.0749485Z plan (Dict[str, Union[:class:`torch.distributed._shard.sharding_spec.ShardingSpec`, 2025-03-17T18:45:14.0750116Z :class:`torch.distributed._shard.sharder.Sharder`]): 2025-03-17T18:45:14.0750694Z a dict describes how to shard a module, there're currently two ways to shard a module: 2025-03-17T18:45:14.0751430Z 1. directly shard a module parameter by a `ShardingSpec`, keyed by the name of 2025-03-17T18:45:14.0752012Z a parameter to a `ShardingSpec`. 2025-03-17T18:45:14.0752584Z 2. shard a submodule by applying a `Sharder` on it, keyed by the name of a module 2025-03-17T18:45:14.0753114Z to a `Sharder` object. 2025-03-17T18:45:14.0753706Z output_plan (Dict[str, :class:`torch.distributed._shard.sharding_spec.ShardingSpec`), optional): 2025-03-17T18:45:14.0754503Z a dict specifies the layout of a module's output which produces a ShardedTensor, 2025-03-17T18:45:14.0755211Z keyed by the name of module to ShardingSpec("" in key means the root module). 2025-03-17T18:45:14.0755842Z Default: `None` 2025-03-17T18:45:14.0756340Z return_local_tensor (List[str], optional): a list of string, each element enables 2025-03-17T18:45:14.0757042Z a module's sharded output to be returned as a Tensor from its local shards to 2025-03-17T18:45:14.0757727Z ensure further processing in a data parallel fashion. ("" in list means the 2025-03-17T18:45:14.0758230Z root module). 
2025-03-17T18:45:14.0758548Z Default: None 2025-03-17T18:45:14.0758815Z Example: 2025-03-17T18:45:14.0759293Z Suppose we want to shard a module with two linear layers and then run it with DDP; we also 2025-03-17T18:45:14.0760057Z want to convert the output of the second linear layer back to DDP. We can do it as follows: 2025-03-17T18:45:14.0760477Z 2025-03-17T18:45:14.0760666Z >>> # xdoctest: +REQUIRES(module:torch._C._distributed_c10d) 2025-03-17T18:45:14.0761138Z >>> class MyModule(nn.Module): 2025-03-17T18:45:14.0761516Z >>> def __init__(self) -> None: 2025-03-17T18:45:14.0761884Z >>> super().__init__() 2025-03-17T18:45:14.0762200Z >>> self.fc1 = nn.Linear() 2025-03-17T18:45:14.0762574Z >>> self.gelu = nn.GELU() 2025-03-17T18:45:14.0762902Z >>> self.fc2 = nn.Linear() 2025-03-17T18:45:14.0763288Z >>> self.relu = nn.ReLU() 2025-03-17T18:45:14.0763602Z >>> 2025-03-17T18:45:14.0763892Z >>> def forward(self, input): 2025-03-17T18:45:14.0764290Z >>> return self.relu(self.fc2(self.gelu(self.fc1(input)))) 2025-03-17T18:45:14.0764652Z 2025-03-17T18:45:14.0764657Z 2025-03-17T18:45:14.0764794Z >>> # xdoctest: +SKIP("Undefined spec1, spec2") 2025-03-17T18:45:14.0765168Z >>> sharding_plan = ShardingPlan( 2025-03-17T18:45:14.0765543Z >>> plan={ 2025-03-17T18:45:14.0765804Z >>> "fc1.weight": spec1, 2025-03-17T18:45:14.0766178Z >>> "fc2.weight": spec2 2025-03-17T18:45:14.0766483Z >>> }, 2025-03-17T18:45:14.0766730Z >>> output_plan={ 2025-03-17T18:45:14.0767095Z >>> "fc2": output_spec 2025-03-17T18:45:14.0767390Z >>> }, 2025-03-17T18:45:14.0767683Z >>> return_local_tensor=["fc2"] 2025-03-17T18:45:14.0768009Z >>> ) 2025-03-17T18:45:14.0768135Z 2025-03-17T18:45:14.0768448Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.0768914Z 2025-03-17T18:45:14.1609556Z msg = Cannot scrape callname=post_localSGD_hook in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/post_localSGD_hook.py line=72. 2025-03-17T18:45:14.1610718Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.1611111Z 2025-03-17T18:45:14.1611246Z Run the post-localSGD algorithm. 2025-03-17T18:45:14.1611444Z 2025-03-17T18:45:14.1611700Z This DDP communication hook is used for running the post-localSGD algorithm, 2025-03-17T18:45:14.1612235Z by combining it with a model averaging component (e.g., 2025-03-17T18:45:14.1612868Z :class:`~torch.distributed.algorithms.model_averaging.averagers.PeriodicModelAverager`) 2025-03-17T18:45:14.1613455Z that runs after the optimizer step. 2025-03-17T18:45:14.1613674Z 2025-03-17T18:45:14.1613777Z Args: 2025-03-17T18:45:14.1614142Z state (PostLocalSGDState): State information to run post-localSGD. 2025-03-17T18:45:14.1614777Z Users mainly need to tune ``start_localSGD_iter`` to determine when to start local SGD. 2025-03-17T18:45:14.1615622Z bucket (dist.GradBucket): Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. 2025-03-17T18:45:14.1616439Z Note that since DDP comm hook only supports single process single device mode, 2025-03-17T18:45:14.1616982Z only exactly one tensor is stored in this bucket. 2025-03-17T18:45:14.1617265Z 2025-03-17T18:45:14.1617357Z Returns: 2025-03-17T18:45:14.1617738Z Future handler of the communication, which updates the gradients in place.
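Looking back at the ``ShardingPlan`` example above: it stops once the plan object is built. Below is a minimal sketch of applying such a plan; it assumes the ``shard_module`` helper exported by ``torch.distributed._shard``, an already-initialized two-rank process group, and concrete ``ChunkShardingSpec`` objects standing in for the undefined ``spec1``/``spec2``/``output_spec``::

    import torch.nn as nn
    from torch.distributed._shard import shard_module
    from torch.distributed._shard.sharding_plan import ShardingPlan
    from torch.distributed._shard.sharding_spec import ChunkShardingSpec

    # Placeholder specs (assumption): column-wise for fc1, row-wise for fc2.
    spec1 = ChunkShardingSpec(dim=0, placements=["rank:0/cuda:0", "rank:1/cuda:1"])
    spec2 = ChunkShardingSpec(dim=1, placements=["rank:0/cuda:0", "rank:1/cuda:1"])

    class TwoLinear(nn.Module):  # stand-in for MyModule from the example above
        def __init__(self) -> None:
            super().__init__()
            self.fc1 = nn.Linear(16, 16)
            self.fc2 = nn.Linear(16, 16)

        def forward(self, x):
            return self.fc2(self.fc1(x))

    sharding_plan = ShardingPlan(
        plan={"fc1.weight": spec1, "fc2.weight": spec2},
        output_plan={"fc2": spec2},       # layout of fc2's ShardedTensor output
        return_local_tensor=["fc2"],      # hand a plain Tensor back for DDP-style use
    )
    module = TwoLinear().cuda()
    shard_module(module, sharding_plan)   # shards the named parameters in place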
2025-03-17T18:45:14.1618269Z 2025-03-17T18:45:14.1618376Z Example:: 2025-03-17T18:45:14.1618616Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.1619062Z >>> state = PostLocalSGDState(process_group=process_group, subgroup=subgroup, 2025-03-17T18:45:14.1619590Z start_localSGD_iter=10) 2025-03-17T18:45:14.1620020Z >>> ddp_model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:14.1620666Z >>> # Also need to establish a model averaging module and run model averaging after ``optimizer.step()``. 2025-03-17T18:45:14.1621497Z >>> # Please refer to the examples in ``torch.distributed.algorithms.model_averaging.averagers`` module. 2025-03-17T18:45:14.1621981Z 2025-03-17T18:45:14.1622241Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.1622628Z 2025-03-17T18:45:14.1664071Z msg = Cannot scrape callname=powerSGD_hook in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py line=342. 2025-03-17T18:45:14.1665232Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.1665637Z 2025-03-17T18:45:14.1665757Z Implement PowerSGD algorithm. 2025-03-17T18:45:14.1665971Z 2025-03-17T18:45:14.1666203Z This DDP communication hook implements PowerSGD gradient compression 2025-03-17T18:45:14.1666883Z algorithm described in the `paper `_. 2025-03-17T18:45:14.1667498Z Once gradient tensors are aggregated across all workers, this hook applies 2025-03-17T18:45:14.1667985Z compression as follows: 2025-03-17T18:45:14.1668169Z 2025-03-17T18:45:14.1668621Z 1. Views the input flattened 1D gradient tensor as a list of per-parameter tensors, and divides all the tensors into two groups: 2025-03-17T18:45:14.1669202Z 2025-03-17T18:45:14.1669633Z 1.1 The tensors that should be compressed before allreduce, because the compression can give enough saving in bandwidth. 2025-03-17T18:45:14.1670203Z 2025-03-17T18:45:14.1670624Z 1.2 Rest of the tensors will be directly allreduced without compression, including all the vector tensors (for biases). 2025-03-17T18:45:14.1671174Z 2025-03-17T18:45:14.1671290Z 2. Handles uncompressed tensors: 2025-03-17T18:45:14.1671510Z 2025-03-17T18:45:14.1672169Z 2.1. Allocate contiguous memory for those uncompressed tensors, and allreduces all the uncompressed tensors as a batch, without compression; 2025-03-17T18:45:14.1672824Z 2025-03-17T18:45:14.1673173Z 2.2. Copies the individual uncompressed tensors from the contiguous memory back to the input tensor. 2025-03-17T18:45:14.1673661Z 2025-03-17T18:45:14.1673900Z 3. Handles the tensors that should be compressed by PowerSGD compression: 2025-03-17T18:45:14.1674277Z 2025-03-17T18:45:14.1674542Z 3.1. For each tensor M, creates two low-rank tensors P and Q for decomposing M, 2025-03-17T18:45:14.1675233Z such that M = PQ^T, where Q is initialized from a standard normal distribution and orthogonalized; 2025-03-17T18:45:14.1675688Z 2025-03-17T18:45:14.1675843Z 3.2. Computes each P in Ps, which is equal to MQ; 2025-03-17T18:45:14.1676119Z 2025-03-17T18:45:14.1676234Z 3.3. Allreduces Ps as a batch; 2025-03-17T18:45:14.1676454Z 2025-03-17T18:45:14.1676576Z 3.4. Orthogonalizes each P in Ps; 2025-03-17T18:45:14.1676808Z 2025-03-17T18:45:14.1677014Z 3.5. Computes each Q in Qs, which is approximately equal to M^TP; 2025-03-17T18:45:14.1677347Z 2025-03-17T18:45:14.1677459Z 3.6. Allreduces Qs as a batch; 2025-03-17T18:45:14.1677680Z 2025-03-17T18:45:14.1677986Z 3.7. 
Computes each M among all the compressed tensors, which is approximately equal to PQ^T. 2025-03-17T18:45:14.1678423Z 2025-03-17T18:45:14.1678843Z Note that this communication hook enforces vanilla allreduce for the first ``state.start_powerSGD_iter`` iterations. 2025-03-17T18:45:14.1679691Z This not only gives the user more control over the tradeoff between speedup and accuracy, 2025-03-17T18:45:14.1680558Z but also helps abstract away some complexity of the internal optimization of DDP for future communication hook developers. 2025-03-17T18:45:14.1681238Z 2025-03-17T18:45:14.1681341Z Args: 2025-03-17T18:45:14.1681910Z state (PowerSGDState): State information to configure the compression rate and support error feedback, warm start, etc. 2025-03-17T18:45:14.1682854Z To tune the compression configs, mainly need to tune ``matrix_approximation_rank``, ``start_powerSGD_iter`` 2025-03-17T18:45:14.1683470Z and ``min_compression_rate``. 2025-03-17T18:45:14.1684133Z bucket (dist.GradBucket): Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. 2025-03-17T18:45:14.1684952Z Note that since DDP comm hook only supports single process single device mode, 2025-03-17T18:45:14.1685502Z only exactly one tensor is stored in this bucket. 2025-03-17T18:45:14.1685789Z 2025-03-17T18:45:14.1685882Z Returns: 2025-03-17T18:45:14.1686274Z Future handler of the communication, which updates the gradients in place. 2025-03-17T18:45:14.1686665Z 2025-03-17T18:45:14.1686770Z Example:: 2025-03-17T18:45:14.1687019Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.1687490Z >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, 2025-03-17T18:45:14.1688073Z start_powerSGD_iter=10, min_compression_rate=0.5) 2025-03-17T18:45:14.1688533Z >>> ddp_model.register_comm_hook(state, powerSGD_hook) 2025-03-17T18:45:14.1688824Z 2025-03-17T18:45:14.1689082Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.1689472Z 2025-03-17T18:45:14.1717258Z msg = Cannot scrape callname=PeriodicModelAverager in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/model_averaging/averagers.py line=38. 2025-03-17T18:45:14.1718417Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.1718818Z 2025-03-17T18:45:14.1719026Z Averages parameters periodically after the warm-up stage. 2025-03-17T18:45:14.1719350Z 2025-03-17T18:45:14.1719617Z This can be used for running `post-local SGD `_, 2025-03-17T18:45:14.1720209Z by running :class:`~torch.nn.DistributedDataParallel` (DDP) 2025-03-17T18:45:14.1720771Z using the subgroups created by :meth:`~torch.distributed.new_subgroups`. 2025-03-17T18:45:14.1721240Z 2025-03-17T18:45:14.1721333Z Args: 2025-03-17T18:45:14.1721638Z period (int): The number of steps per model averaging. 2025-03-17T18:45:14.1722208Z Usually the period should be greater than ``1`` to reduce the communication cost. 2025-03-17T18:45:14.1722759Z Otherwise, only DDP needs to be used. 2025-03-17T18:45:14.1723227Z warmup_steps (int): The number of warm-up steps. During this stage, 2025-03-17T18:45:14.1723690Z model averaging is skipped. 2025-03-17T18:45:14.1724131Z process_group: The process group to be used for all-reduce. 2025-03-17T18:45:14.1724599Z If ``None``, the default process group, which 2025-03-17T18:45:14.1725070Z is created by :func:`torch.distributed.init_process_group`, 2025-03-17T18:45:14.1725524Z will be used. 
(default: ``None``) 2025-03-17T18:45:14.1725762Z 2025-03-17T18:45:14.1725878Z Example:: 2025-03-17T18:45:14.1726007Z 2025-03-17T18:45:14.1726157Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.1726508Z >>> import torch 2025-03-17T18:45:14.1726796Z >>> import torch.distributed as dist 2025-03-17T18:45:14.1727356Z >>> import torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook as post_localSGD 2025-03-17T18:45:14.1728083Z >>> import torch.distributed.algorithms.model_averaging.averagers as averagers 2025-03-17T18:45:14.1728599Z >>> import torch.nn as nn 2025-03-17T18:45:14.1728886Z >>> 2025-03-17T18:45:14.1729202Z >>> dist.init_process_group("nccl", rank=rank, world_size=16) 2025-03-17T18:45:14.1729678Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:14.1730041Z >>> module = nn.Linear(1, 1, bias=False).cuda() 2025-03-17T18:45:14.1730460Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:14.1730890Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:14.1731253Z >>> ) 2025-03-17T18:45:14.1731548Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:14.1732128Z >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100) 2025-03-17T18:45:14.1732727Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:14.1733107Z >>> 2025-03-17T18:45:14.1733510Z >>> # In the first 100 steps, run global gradient averaging like normal DDP at every step. 2025-03-17T18:45:14.1734073Z >>> # After 100 steps, run model averaging every 4 steps. 2025-03-17T18:45:14.1734676Z >>> # Note that ``warmup_steps`` must be the same as ``start_localSGD_iter`` used in ``PostLocalSGDState``. 2025-03-17T18:45:14.1735384Z >>> averager = averagers.PeriodicModelAverager(period=4, warmup_steps=100) 2025-03-17T18:45:14.1735876Z >>> for step in range(0, 200): 2025-03-17T18:45:14.1736202Z >>> optimizer.zero_grad() 2025-03-17T18:45:14.1736530Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:14.1737216Z >>> loss.backward() 2025-03-17T18:45:14.1737659Z >>> optimizer.step() 2025-03-17T18:45:14.1738221Z >>> # Will average model parameters globally every 4 steps. Thus, 2025-03-17T18:45:14.1738863Z >>> # inter-node communication only occurs every 4 iterations after 2025-03-17T18:45:14.1739330Z >>> # the initial ``warmup_steps`` period. 2025-03-17T18:45:14.1739735Z >>> averager.average_parameters(model.parameters()) 2025-03-17T18:45:14.1740028Z 2025-03-17T18:45:14.1740287Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.1740675Z 2025-03-17T18:45:14.1741587Z msg = Cannot scrape callname=HierarchicalModelAverager in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/model_averaging/hierarchical_model_averager.py line=19. 2025-03-17T18:45:14.1742850Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.1743237Z 2025-03-17T18:45:14.1743727Z Runs hierarchical model averaging (`hierarchical SGD `_). 2025-03-17T18:45:14.1744192Z 2025-03-17T18:45:14.1744523Z Process groups of different sizes are organized in a hierarchy, and they average parameters 2025-03-17T18:45:14.1745177Z by using different periods concurrently after the warm-up stage. 
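The ``PeriodicModelAverager`` example above is skipped by xdoctest because ``optimizer``, ``loss_fn``, ``output`` and ``labels`` are undefined. The sketch below fills those in with generic stand-ins (synthetic data, SGD, MSE loss) and otherwise keeps the same wiring; it assumes a torchrun-style launch that sets ``LOCAL_RANK`` and one GPU per rank::

    import os
    import torch
    import torch.distributed as dist
    import torch.distributed.algorithms.model_averaging.averagers as averagers
    import torch.nn as nn
    from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (
        PostLocalSGDState,
        post_localSGD_hook,
    )

    local_rank = int(os.environ["LOCAL_RANK"])   # assumption: set by the launcher
    dist.init_process_group("nccl")
    torch.cuda.set_device(local_rank)

    model = nn.parallel.DistributedDataParallel(
        nn.Linear(8, 1).cuda(), device_ids=[local_rank], output_device=local_rank
    )
    state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100)
    model.register_comm_hook(state, post_localSGD_hook)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # stand-in optimizer
    loss_fn = nn.MSELoss()                                     # stand-in loss
    averager = averagers.PeriodicModelAverager(period=4, warmup_steps=100)

    for step in range(200):
        inputs = torch.randn(32, 8, device="cuda")             # synthetic batch
        labels = torch.randn(32, 1, device="cuda")
        optimizer.zero_grad()
        loss_fn(model(inputs), labels).backward()
        optimizer.step()
        # Past warm-up, parameters are averaged every `period` steps.
        averager.average_parameters(model.parameters())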
2025-03-17T18:45:14.1745921Z This is an extension of :class:`~torch.distributed.algorithms.model_averaging.averagers.PeriodicModelAverager` 2025-03-17T18:45:14.1746875Z that supports `post-local SGD `_, which essentially only supports 2025-03-17T18:45:14.1747662Z a two-level hierarchy: the intra-machine level and the global level, where the intra-machine 2025-03-17T18:45:14.1748471Z level is usually embedded in :meth:`~torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook`. 2025-03-17T18:45:14.1749268Z Similarly, the process groups within this class do not have such an intra-machine process 2025-03-17T18:45:14.1749996Z subgroup, which should be embedded by the post-local SGD communication hook instead. 2025-03-17T18:45:14.1750403Z 2025-03-17T18:45:14.1750508Z Args: 2025-03-17T18:45:14.1750909Z period_group_size_dict: An ordered dict mapping keys of model averaging period to 2025-03-17T18:45:14.1751520Z process group size, used for initializing process groups of 2025-03-17T18:45:14.1752089Z different sizes in a hierarchy to average parameters concurrently. 2025-03-17T18:45:14.1752667Z Particularly, at each iteration, there will be at most a single 2025-03-17T18:45:14.1753253Z process group that runs averaging -- the period of such group should 2025-03-17T18:45:14.1753919Z have the largest period which the current step can be divided by. 2025-03-17T18:45:14.1754445Z For example, if the dict has three keys: 2, 4, and 8, 2025-03-17T18:45:14.1754968Z then this means totally three process groups will be created to 2025-03-17T18:45:14.1755519Z average parameters every 2, 4, and 8 iterations, respectively. 2025-03-17T18:45:14.1756062Z At the 4th iteration, only the second process group will run 2025-03-17T18:45:14.1756576Z averaging, because the first process group should be a 2025-03-17T18:45:14.1757118Z subset of the second process group, and no need to execute the first 2025-03-17T18:45:14.1757608Z process group redundantly. 2025-03-17T18:45:14.1758068Z On the other hand, the third process group can only be triggered 2025-03-17T18:45:14.1758651Z every 8 iterations, so it will not be triggered at the 4th iteration. 2025-03-17T18:45:14.1759328Z warmup_steps (int): The number of warm-up steps. During this stage, model averaging is skipped. 2025-03-17T18:45:14.1760231Z process_group (ProcessGroup, optional): The overall process group containing all the processes that runs model averaging. 2025-03-17T18:45:14.1760999Z If ``None``, the default process group, which is created 2025-03-17T18:45:14.1761526Z by :func:`torch.distributed.init_process_group`, will be used. 
2025-03-17T18:45:14.1761997Z (default: ``None``) 2025-03-17T18:45:14.1762256Z 2025-03-17T18:45:14.1762359Z Example:: 2025-03-17T18:45:14.1762623Z >>> # xdoctest: +SKIP('undefined rank') 2025-03-17T18:45:14.1762999Z >>> from collections import OrderedDict 2025-03-17T18:45:14.1763346Z >>> import torch 2025-03-17T18:45:14.1763638Z >>> import torch.distributed as dist 2025-03-17T18:45:14.1764156Z >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import ( 2025-03-17T18:45:14.1764683Z >>> PostLocalSGDState, 2025-03-17T18:45:14.1764996Z >>> post_localSGD_hook, 2025-03-17T18:45:14.1765362Z >>> ) 2025-03-17T18:45:14.1765882Z >>> import torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD 2025-03-17T18:45:14.1766504Z >>> import torch.nn as nn 2025-03-17T18:45:14.1766794Z >>> 2025-03-17T18:45:14.1767107Z >>> dist.init_process_group("nccl", rank=rank, world_size=16) 2025-03-17T18:45:14.1785989Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:14.1786546Z >>> module = nn.Linear(1, 1, bias=False).to(rank) 2025-03-17T18:45:14.1786986Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:14.1787431Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:14.1787796Z >>> ) 2025-03-17T18:45:14.1788088Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:14.1788648Z >>> # Assume that each machine has 4 GPUs, then each intra-machine subgroup has a size of 4. 2025-03-17T18:45:14.1789190Z >>> subgroup, _ = dist.new_subgroups() 2025-03-17T18:45:14.1789764Z >>> state = PostLocalSGDState(process_group=None, subgroup=subgroup, start_localSGD_iter=100) 2025-03-17T18:45:14.1790386Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:14.1790765Z >>> 2025-03-17T18:45:14.1791193Z >>> # Average parameters among each group of 8 processes every 4 iterations, and among all 2025-03-17T18:45:14.1791742Z >>> # the 16 processes every 16 iterations. 2025-03-17T18:45:14.1792178Z >>> averager = hierarchicalSGD.HierarchicalModelAverager( 2025-03-17T18:45:14.1792727Z >>> period_group_size_dict=OrderedDict([(4, 8), (16, 16)]), warmup_steps=100) 2025-03-17T18:45:14.1793534Z >>> # Note that ``warmup_steps`` must be the same as ``start_localSGD_iter`` used in ``PostLocalSGDState``. 2025-03-17T18:45:14.1794264Z >>> # In the first 100 steps, run global gradient averaging like normal DDP at every step. 2025-03-17T18:45:14.1794829Z >>> # After 100 steps, run model averaging at two levels. 2025-03-17T18:45:14.1795230Z >>> for step in range(0, 200): 2025-03-17T18:45:14.1795544Z >>> optimizer.zero_grad() 2025-03-17T18:45:14.1795869Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:14.1796206Z >>> loss.backward() 2025-03-17T18:45:14.1796499Z >>> optimizer.step() 2025-03-17T18:45:14.1796850Z >>> # Average parameters after ``optimizer.step()``. 2025-03-17T18:45:14.1797425Z >>> # Thus, the inter-node communication only occurs periodically after ``warmup_steps``. 2025-03-17T18:45:14.1798003Z >>> averager.average_parameters(model.parameters()) 2025-03-17T18:45:14.1798290Z 2025-03-17T18:45:14.1798396Z .. warning :: 2025-03-17T18:45:14.1798801Z The last group size in the dict must be the size of the provided ``process_group``, 2025-03-17T18:45:14.1799431Z which indicates model averaging at the highest level of the hierarchy. 2025-03-17T18:45:14.1800105Z If ``process_group`` is not provided, then the last group size should be equal to the world size. 2025-03-17T18:45:14.1800550Z 2025-03-17T18:45:14.1800650Z .. 
warning :: 2025-03-17T18:45:14.1801034Z `HierarchicalModelAverager` is experimental and subject to change. 2025-03-17T18:45:14.1801390Z 2025-03-17T18:45:14.1801661Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.1802048Z 2025-03-17T18:45:14.2114465Z msg = Cannot scrape callname=BroadcastingTorchSaveReader in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/format_utils.py line=40. 2025-03-17T18:45:14.2115598Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2116010Z 2025-03-17T18:45:14.2116311Z StorageReader for reading a Torch Save file. This reader will read the entire checkpoint 2025-03-17T18:45:14.2116997Z on the coordinator rank, and then broadcast and shard each tensor to all ranks. 2025-03-17T18:45:14.2117434Z 2025-03-17T18:45:14.2117685Z . N.B. Intended to be used with DynamicMetaLoadPlanner 2025-03-17T18:45:14.2117980Z 2025-03-17T18:45:14.2118219Z .. warning:: 2025-03-17T18:45:14.2118559Z Current implementation only supports loading Tensors. 2025-03-17T18:45:14.2118952Z 2025-03-17T18:45:14.2119073Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2119414Z >>> sd = {"mode": model} 2025-03-17T18:45:14.2119686Z >>> dcp.load( 2025-03-17T18:45:14.2119924Z >>> sd, 2025-03-17T18:45:14.2120214Z >>> storage_reader=BroadcastingTorchSaveReader(), 2025-03-17T18:45:14.2120616Z >>> planner=DynamicMetaLoadPlanner(), 2025-03-17T18:45:14.2120981Z >>> checkpoint_id="path_to_model.pt" 2025-03-17T18:45:14.2121307Z >>> ) 2025-03-17T18:45:14.2121428Z 2025-03-17T18:45:14.2121699Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2122078Z 2025-03-17T18:45:14.2122796Z msg = Cannot scrape callname=DynamicMetaLoadPlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/format_utils.py line=151. 2025-03-17T18:45:14.2123887Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2124273Z 2025-03-17T18:45:14.2124662Z Extension of DefaultLoadPlanner, which creates a new Metadata object based on the passed in state dict, 2025-03-17T18:45:14.2125500Z avoiding the need to read metadata from disk. This is useful when reading formats which don't have a 2025-03-17T18:45:14.2126085Z metadata file, like Torch Save files. 2025-03-17T18:45:14.2126307Z 2025-03-17T18:45:14.2126508Z . N.B. Intended to be used with BroadcastingTorchSaveReader 2025-03-17T18:45:14.2126810Z 2025-03-17T18:45:14.2126923Z .. warning:: 2025-03-17T18:45:14.2127334Z Current implementation only supports loading Tensors. 2025-03-17T18:45:14.2127623Z 2025-03-17T18:45:14.2127755Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2128090Z >>> sd = {"mode": model} 2025-03-17T18:45:14.2128351Z >>> dcp.load( 2025-03-17T18:45:14.2128586Z >>> sd, 2025-03-17T18:45:14.2128878Z >>> storage_reader=BroadcastingTorchSaveReader(), 2025-03-17T18:45:14.2129290Z >>> planner=DynamicMetaLoadPlanner(), 2025-03-17T18:45:14.2129657Z >>> checkpoint_id="path_to_model.pt" 2025-03-17T18:45:14.2129979Z >>> ) 2025-03-17T18:45:14.2130111Z 2025-03-17T18:45:14.2130371Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2130765Z 2025-03-17T18:45:14.2193767Z msg = Cannot scrape callname=load_sharded_optimizer_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/optimizer.py line=221. 
2025-03-17T18:45:14.2194898Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2195317Z 2025-03-17T18:45:14.2195536Z Load a state_dict in conjunction with FSDP sharded optimizer state. 2025-03-17T18:45:14.2195887Z 2025-03-17T18:45:14.2196058Z This is the current recommended way to checkpoint FSDP. 2025-03-17T18:45:14.2196454Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.2196791Z >>> import torch.distributed.checkpoint as dist_cp 2025-03-17T18:45:14.2197166Z >>> # Save 2025-03-17T18:45:14.2197410Z >>> model: torch.nn.Model 2025-03-17T18:45:14.2197721Z >>> optim_params = model.parameters() 2025-03-17T18:45:14.2198106Z >>> optim = torch.optim.SGD(optim_params, lr=0.01) 2025-03-17T18:45:14.2198464Z >>> # Save 2025-03-17T18:45:14.2198824Z >>> with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT): 2025-03-17T18:45:14.2199264Z >>> state_dict = { 2025-03-17T18:45:14.2199596Z >>> "optimizer": FSDP.optim_state_dict(model, optim), 2025-03-17T18:45:14.2199994Z >>> "model": model.state_dict() 2025-03-17T18:45:14.2200319Z >>> } 2025-03-17T18:45:14.2200562Z >>> dist_cp.save_state_dict( 2025-03-17T18:45:14.2200879Z >>> state_dict=optim_state, 2025-03-17T18:45:14.2201284Z >>> storage_writer=dist_cp.FileSystemWriter("checkpoint"), 2025-03-17T18:45:14.2201731Z >>> planner=dist_cp.DefaultSavePlanner(), 2025-03-17T18:45:14.2202078Z >>> ) 2025-03-17T18:45:14.2202298Z >>> 2025-03-17T18:45:14.2202642Z >>> # Load 2025-03-17T18:45:14.2203022Z >>> with FSDP.state_dict_type(model_tp, StateDictType.SHARDED_STATE_DICT): 2025-03-17T18:45:14.2203521Z >>> model_state_dict = model_tp.state_dict() 2025-03-17T18:45:14.2203881Z >>> checkpoint = { 2025-03-17T18:45:14.2204173Z >>> "model": model_state_dict 2025-03-17T18:45:14.2204484Z >>> } 2025-03-17T18:45:14.2204712Z >>> dist_cp.load_state_dict( 2025-03-17T18:45:14.2205028Z >>> state_dict=checkpoint, 2025-03-17T18:45:14.2205435Z >>> storage_reader=dist_cp.FileSystemReader(checkpoint_file), 2025-03-17T18:45:14.2205896Z >>> planner=dist_cp.DefaultLoadPlanner(), 2025-03-17T18:45:14.2206238Z >>> ) 2025-03-17T18:45:14.2206527Z >>> model.load_state_dict(checkpoint["model_state"]) 2025-03-17T18:45:14.2206887Z >>> 2025-03-17T18:45:14.2207184Z >>> optim_state = dist_cp.load_sharded_optimizer_state_dict( 2025-03-17T18:45:14.2207588Z >>> model_state_dict, 2025-03-17T18:45:14.2207898Z >>> optimizer_key="optimizer", 2025-03-17T18:45:14.2208308Z >>> storage_reader=dist_cp.FileSystemReader("checkpoint"), 2025-03-17T18:45:14.2208699Z >>> ) 2025-03-17T18:45:14.2208916Z >>> 2025-03-17T18:45:14.2209189Z >>> flattened_osd = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:14.2209588Z >>> model, optim, optim_state["optimizer"] 2025-03-17T18:45:14.2209927Z >>> ) 2025-03-17T18:45:14.2210146Z >>> 2025-03-17T18:45:14.2210384Z >>> optim.load_state_dict(flattened_osd) 2025-03-17T18:45:14.2210630Z 2025-03-17T18:45:14.2210889Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2211362Z 2025-03-17T18:45:14.2224178Z msg = Cannot scrape callname=SavePlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/planner.py line=113. 2025-03-17T18:45:14.2225332Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2225721Z 2025-03-17T18:45:14.2226040Z Abstract class defining the protocol used by save_state_dict to plan the save process. 
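A compact, hedged restatement of the load half of the FSDP example above, with the placeholder names made consistent (the ``"checkpoint"`` path is arbitrary, and ``model``/``optim`` are assumed to be the same FSDP-wrapped module and optimizer that produced the save)::

    import torch.distributed.checkpoint as dist_cp
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp import StateDictType

    with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
        state_dict = {"model": model.state_dict()}
        dist_cp.load_state_dict(
            state_dict=state_dict,
            storage_reader=dist_cp.FileSystemReader("checkpoint"),
        )
        model.load_state_dict(state_dict["model"])

        # Optimizer state is loaded separately and re-flattened for this FSDP instance.
        optim_state = dist_cp.load_sharded_optimizer_state_dict(
            model_state_dict=state_dict["model"],
            optimizer_key="optimizer",
            storage_reader=dist_cp.FileSystemReader("checkpoint"),
        )
        flattened_osd = FSDP.optim_state_dict_to_load(
            model, optim, optim_state["optimizer"]
        )
        optim.load_state_dict(flattened_osd)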
2025-03-17T18:45:14.2226511Z 2025-03-17T18:45:14.2226835Z SavePlanners are stateful objects that can be used to customize the whole save process. 2025-03-17T18:45:14.2227260Z 2025-03-17T18:45:14.2227560Z SavePlanner acts as an access proxy to the state_dict, so any transformation done to it 2025-03-17T18:45:14.2228096Z will be visible to the whole process. 2025-03-17T18:45:14.2228319Z 2025-03-17T18:45:14.2228618Z A planner subclass can expect the following sequence of calls during save_state_dict: 2025-03-17T18:45:14.2229028Z 2025-03-17T18:45:14.2229167Z 1) set_up_planner - called on all ranks. 2025-03-17T18:45:14.2229535Z Signals the start of a checkpoint save. 2025-03-17T18:45:14.2229771Z 2025-03-17T18:45:14.2229911Z 2) create_local_plan - called on all ranks. 2025-03-17T18:45:14.2230613Z Process the state_dict and produces a `SavePlan` that will be sent for global planning. 2025-03-17T18:45:14.2231072Z 2025-03-17T18:45:14.2231381Z 3) create_global_plan - called on the coordinator rank only. 2025-03-17T18:45:14.2232288Z Takes the SavePlan from all ranks and make any global decision. 2025-03-17T18:45:14.2232608Z 2025-03-17T18:45:14.2232741Z 4) finish_plan - called on all ranks. 2025-03-17T18:45:14.2233193Z This gives each rank a chance to adjust to global planning decisions. 2025-03-17T18:45:14.2233536Z 2025-03-17T18:45:14.2233710Z 5) resolve_data - called multiple times on each rank 2025-03-17T18:45:14.2234203Z Lookups a value on the `state_dict` for the storage layer to write. 2025-03-17T18:45:14.2234534Z 2025-03-17T18:45:14.2234860Z Users are recommended to extend DefaultSavePlanner instead of this interface directly as 2025-03-17T18:45:14.2235490Z most changes can be expressed by changes in a single method. 2025-03-17T18:45:14.2235793Z 2025-03-17T18:45:14.2235932Z There are 3 usual patterns of extension: 2025-03-17T18:45:14.2236165Z 2025-03-17T18:45:14.2236542Z Rewriting state_dict. This is the simplest way to extend the save process as it 2025-03-17T18:45:14.2237352Z doesn't requite understanding the intrincacies of how SavePlan works: 2025-03-17T18:45:14.2237705Z 2025-03-17T18:45:14.2237839Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2238214Z >>> class RenamePlanner(DefaultSavePlanner): 2025-03-17T18:45:14.2238580Z >>> def set_up_planner( 2025-03-17T18:45:14.2238854Z >>> self, 2025-03-17T18:45:14.2239120Z >>> state_dict: STATE_DICT_TYPE, 2025-03-17T18:45:14.2239477Z >>> storage_meta: Optional[StorageMeta], 2025-03-17T18:45:14.2239835Z >>> is_coordinator: bool, 2025-03-17T18:45:14.2240146Z >>> ) -> None: 2025-03-17T18:45:14.2240414Z >>> # prefix all keys with `foo_`` 2025-03-17T18:45:14.2240925Z >>> super().set_up_planner({"foo_" + k: v for k, v in state_dict.items()}, storage_meta, is_coordinator) 2025-03-17T18:45:14.2241348Z 2025-03-17T18:45:14.2241694Z Modifying local plan and lookup in tandem. 
This is useful when fine control of how data is persisted 2025-03-17T18:45:14.2242165Z 2025-03-17T18:45:14.2242284Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2242647Z >>> class FP16Planner(DefaultSavePlanner): 2025-03-17T18:45:14.2243005Z >>> def create_local_plan(self): 2025-03-17T18:45:14.2243349Z >>> plan = super().create_local_plan() 2025-03-17T18:45:14.2243695Z >>> for p in plan: 2025-03-17T18:45:14.2244008Z >>> if p.tensor_data is not None: 2025-03-17T18:45:14.2244413Z >>> p.tensor_data.properties.dtype = torch.float16 2025-03-17T18:45:14.2244802Z >>> return plan 2025-03-17T18:45:14.2245180Z >>> 2025-03-17T18:45:14.2245427Z >>> def resolve_data(self, write_item): 2025-03-17T18:45:14.2245796Z >>> item = super().resolve_data(write_item) 2025-03-17T18:45:14.2246330Z >>> return item if write_item.type == WriteItemType.BYTE_IO else item.to(torch.float16) 2025-03-17T18:45:14.2246731Z 2025-03-17T18:45:14.2247101Z Using the global planning step to make central decisions that can't be made individually by each rank 2025-03-17T18:45:14.2247574Z 2025-03-17T18:45:14.2247705Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2248059Z >>> from itertools import zip_longest 2025-03-17T18:45:14.2248394Z >>> from dataclasses import replace 2025-03-17T18:45:14.2248793Z >>> class DDPLoadBalancingPlanner(DefaultSavePlanner): 2025-03-17T18:45:14.2249369Z >>> # This uses the default local plan behavior of having all non-sharded writes in rank 0 2025-03-17T18:45:14.2249923Z >>> # This sample doesn't handle ShardedTensors 2025-03-17T18:45:14.2250310Z >>> def create_global_plan(self, all_plans): 2025-03-17T18:45:14.2250722Z >>> iters = [iter(all_plans[0].items)] * len(all_plans) 2025-03-17T18:45:14.2251105Z >>> items_per_rank = [ 2025-03-17T18:45:14.2251444Z >>> [item for item in items if item is not None] 2025-03-17T18:45:14.2251869Z >>> for items in zip(*zip_longest(*iters), strict=True) 2025-03-17T18:45:14.2252252Z >>> ] 2025-03-17T18:45:14.2252495Z >>> all_plans = [ 2025-03-17T18:45:14.2252791Z >>> replace(plan, items=items) 2025-03-17T18:45:14.2253221Z >>> for plan, items in zip(all_plans, items_per_rank, strict=True) 2025-03-17T18:45:14.2253633Z >>> ] 2025-03-17T18:45:14.2253921Z >>> return super().create_global_plan(all_plans) 2025-03-17T18:45:14.2254179Z 2025-03-17T18:45:14.2254463Z Finally, some planners need to save additional metadata in the checkpoint, this is 2025-03-17T18:45:14.2255143Z accomplished by having each rank contribute their data items in the local plan and 2025-03-17T18:45:14.2255674Z the global planner aggregate them: 2025-03-17T18:45:14.2255889Z 2025-03-17T18:45:14.2256021Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2256415Z >>> class SaveExtraDataPlanner(DefaultSavePlanner): 2025-03-17T18:45:14.2256830Z >>> def create_local_plan(self) -> SavePlan: 2025-03-17T18:45:14.2257291Z >>> plan = super().create_local_plan() 2025-03-17T18:45:14.2257706Z >>> return replace(plan, planner_data="per-rank-data") 2025-03-17T18:45:14.2258086Z >>> 2025-03-17T18:45:14.2258523Z >>> def create_global_plan(self, all_plans: List[SavePlan]) -> Tuple[List[SavePlan], Metadata]: 2025-03-17T18:45:14.2259156Z >>> global_plan, metadata = super().create_global_plan(all_plans) 2025-03-17T18:45:14.2259652Z >>> merged_data = [p.planner_data for p in global_plan] 2025-03-17T18:45:14.2260116Z >>> metadata = replace(metadata, planner_data=merged_data) 2025-03-17T18:45:14.2260539Z >>> return global_plan, metadata 2025-03-17T18:45:14.2260778Z 2025-03-17T18:45:14.2261038Z 
Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2261427Z 2025-03-17T18:45:14.2262064Z msg = Cannot scrape callname=LoadPlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/planner.py line=293. 2025-03-17T18:45:14.2263066Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2263465Z 2025-03-17T18:45:14.2263772Z Abstract class defining the protocol used by load_state_dict to plan the load process. 2025-03-17T18:45:14.2264188Z 2025-03-17T18:45:14.2264498Z LoadPlanner are stateful objects that can be used to customize the whole load process. 2025-03-17T18:45:14.2264917Z 2025-03-17T18:45:14.2265222Z LoadPlanner acts as an access proxy to the state_dict, so any transformation done to it 2025-03-17T18:45:14.2265766Z will be visible to the whole process. 2025-03-17T18:45:14.2265988Z 2025-03-17T18:45:14.2266340Z A planner subclass can expect the following sequence of calls during load_state_dict: 2025-03-17T18:45:14.2266840Z 2025-03-17T18:45:14.2266981Z 1) set_up_planner - called on all ranks. 2025-03-17T18:45:14.2267351Z Signals the start of loading a checkpoint. 2025-03-17T18:45:14.2267608Z 2025-03-17T18:45:14.2267739Z 2) create_local_plan - called on all ranks. 2025-03-17T18:45:14.2268283Z Process the state_dict and produces a `LoadPlan` that will be sent for global planning. 2025-03-17T18:45:14.2268710Z 2025-03-17T18:45:14.2268898Z 3) create_global_plan - called on the coordinator rank only. 2025-03-17T18:45:14.2269412Z Takes the LoadPlan from all ranks and make any global decision. 2025-03-17T18:45:14.2269744Z 2025-03-17T18:45:14.2269895Z 4) load_bytes - called multiple times on each rank 2025-03-17T18:45:14.2270341Z This is called once per non-tensor value in state_dict. 2025-03-17T18:45:14.2270639Z 2025-03-17T18:45:14.2270867Z 5) resolve_tensor and commit_tensor - called multiple times on each rank 2025-03-17T18:45:14.2271416Z They are called in pair for each Tensor value in state_dict. 2025-03-17T18:45:14.2271732Z 2025-03-17T18:45:14.2272041Z Users are recommended to extend DefaultLoadPlanner instead of this interface directly as 2025-03-17T18:45:14.2272672Z most changes can be expressed by changes in a single method. 2025-03-17T18:45:14.2272986Z 2025-03-17T18:45:14.2273122Z There are two usual patterns of extension: 2025-03-17T18:45:14.2273376Z 2025-03-17T18:45:14.2273633Z Rewriting state_dict. This is the simplest way to extend the load process as it 2025-03-17T18:45:14.2274288Z doesn't requite understanding the intrincacies of how LoadPlan works. 
We need 2025-03-17T18:45:14.2274913Z to keep a reference to the original state_dict as load happens in place so 2025-03-17T18:45:14.2275407Z we need to be able to perform it in place 2025-03-17T18:45:14.2275657Z 2025-03-17T18:45:14.2275775Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2276150Z >>> class RenamePlanner(DefaultLoadPlanner): 2025-03-17T18:45:14.2276519Z >>> def set_up_planner( 2025-03-17T18:45:14.2276803Z >>> self, 2025-03-17T18:45:14.2277071Z >>> state_dict: STATE_DICT_TYPE, 2025-03-17T18:45:14.2277410Z >>> metadata: Metadata, 2025-03-17T18:45:14.2277720Z >>> is_coordinator: bool, 2025-03-17T18:45:14.2278029Z >>> ) -> None: 2025-03-17T18:45:14.2278373Z >>> self.original_state_dict = state_dict 2025-03-17T18:45:14.2278811Z >>> state_dict = {"foo_" + k: v for k, v in state_dict.items()} 2025-03-17T18:45:14.2279208Z >>> 2025-03-17T18:45:14.2279449Z >>> if self.flatten_sharded_tensors: 2025-03-17T18:45:14.2279848Z >>> state_dict = _flatten_sharded_tensors(state_dict) 2025-03-17T18:45:14.2280216Z >>> 2025-03-17T18:45:14.2280453Z >>> if self.flatten_state_dict: 2025-03-17T18:45:14.2280868Z >>> state_dict, self.mappings = flatten_state_dict(state_dict) 2025-03-17T18:45:14.2281273Z >>> 2025-03-17T18:45:14.2281514Z >>> self.state_dict = state_dict 2025-03-17T18:45:14.2281856Z >>> self.metadata = metadata 2025-03-17T18:45:14.2282201Z >>> self.is_coordinator = is_coordinator 2025-03-17T18:45:14.2282533Z >>> 2025-03-17T18:45:14.2282781Z >>> def load_bytes(self, read_item, value): 2025-03-17T18:45:14.2283140Z >>> # Remove the "foo_" prefix 2025-03-17T18:45:14.2283675Z >>> self.original_state_dict[read_item.dest_index.fqn[4:]] = torch.load(value, weights_only=False) 2025-03-17T18:45:14.2284125Z 2025-03-17T18:45:14.2284129Z 2025-03-17T18:45:14.2284413Z Modifying resolve_tensor and commit_tensor to handle load time transformation. 2025-03-17T18:45:14.2284801Z 2025-03-17T18:45:14.2284933Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.2285324Z >>> class MetaModelMaterialize(DefaultSavePlanner): 2025-03-17T18:45:14.2285728Z >>> def resolve_tensor(self, read_item): 2025-03-17T18:45:14.2286104Z >>> tensor = super().resolve_tensor(read_item) 2025-03-17T18:45:14.2286569Z >>> return torch.empty_like(tensor, device="cpu") 2025-03-17T18:45:14.2286928Z >>> 2025-03-17T18:45:14.2287178Z >>> def commit_tensor(self, read_item, tensor): 2025-03-17T18:45:14.2287602Z >>> self.state_dict[read_item.dest_index.fqn] = tensor 2025-03-17T18:45:14.2287894Z 2025-03-17T18:45:14.2288155Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2288554Z 2025-03-17T18:45:14.2409747Z msg = Cannot scrape callname=get_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py line=1124. 2025-03-17T18:45:14.2410783Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2411175Z 2025-03-17T18:45:14.2411360Z Return the model state_dict and optimizers state_dict. 2025-03-17T18:45:14.2411641Z 2025-03-17T18:45:14.2411882Z ``get_state_dict`` can process any module that is parallelized by PyTorch 2025-03-17T18:45:14.2412477Z FSDP/fully_shard, DDP/replicate, tensor_parallel/parallelize_module, and any 2025-03-17T18:45:14.2413129Z combination of these parallelisms. The main functions of ``get_state_dict`` 2025-03-17T18:45:14.2413756Z are: 1.) 
returning a model and optimizer state_dict that can be resharded 2025-03-17T18:45:14.2414322Z with a different number of trainers and/or different parallelisms. 2025-03-17T18:45:14.2414928Z 2.) hiding the parallelism-specific state_dict APIs. Users don't have to call 2025-03-17T18:45:14.2415395Z these APIs. 2025-03-17T18:45:14.2415661Z 3.) sanity checking the result state_dict. 2025-03-17T18:45:14.2415909Z 2025-03-17T18:45:14.2416123Z The keys of the result state dictionary are the canonical FQNs (Fully 2025-03-17T18:45:14.2416705Z Qualified Names). A canonical FQN refers to the FQN based on a parameter's 2025-03-17T18:45:14.2417319Z position in an nn.Module hierarchy. More specifically, a canonical FQN to a 2025-03-17T18:45:14.2417902Z parameter is the FQN returned by ``module.named_parameters()`` or 2025-03-17T18:45:14.2418455Z ``module.named_buffers()`` when the module is not distributed by any 2025-03-17T18:45:14.2419055Z parallelisms. Since the optimizer internally uses parameter IDs to represent 2025-03-17T18:45:14.2419667Z a parameter, there will be a conversion from the parameter IDs to the 2025-03-17T18:45:14.2420133Z canonical FQNs when calling this API. 2025-03-17T18:45:14.2420367Z 2025-03-17T18:45:14.2420720Z ``get_state_dict`` can also process a module that is not parallelized. In 2025-03-17T18:45:14.2421311Z such a case, ``get_state_dict`` only performs one function -- converting the 2025-03-17T18:45:14.2421821Z optimizer parameter IDs to the canonical FQNs. 2025-03-17T18:45:14.2422093Z 2025-03-17T18:45:14.2422186Z Example: 2025-03-17T18:45:14.2422424Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.2422704Z >>> import torch 2025-03-17T18:45:14.2423110Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:14.2423686Z >>> from torch.nn.parallel import DistributedDataParallel as DDP 2025-03-17T18:45:14.2424257Z >>> from torch.distributed.checkpoint.state_dict import get_state_dict 2025-03-17T18:45:14.2424608Z 2025-03-17T18:45:14.2424755Z >>> fsdp_model = FSDP(copy.deepcopy(model)) 2025-03-17T18:45:14.2425197Z >>> fsdp_optim = torch.optim.Adam(model.parameters(), lr=1e-3) 2025-03-17T18:45:14.2425637Z >>> ddp_model = DDP(copy.deepcopy(model)) 2025-03-17T18:45:14.2426072Z >>> ddp_optim = torch.optim.Adam(model.parameters(), lr=1e-3) 2025-03-17T18:45:14.2426376Z 2025-03-17T18:45:14.2426380Z 2025-03-17T18:45:14.2426720Z >>> ddp_state_dict, ddp_optim_state_dict = get_state_dict(ddp_model, ddp_optim) 2025-03-17T18:45:14.2427273Z >>> fsdp_state_dict, fsdp_optim_state_dict = get_state_dict( 2025-03-17T18:45:14.2427686Z ... fsdp_model, fsdp_optim 2025-03-17T18:45:14.2427992Z ... ) 2025-03-17T18:45:14.2428116Z 2025-03-17T18:45:14.2428350Z >>> # if we simply call ddp_model.state_dict() and fsdp_model.state_dict(), 2025-03-17T18:45:14.2428864Z >>> # the asserts will fail. 2025-03-17T18:45:14.2429201Z >>> assert ddp_state_dict == fsdp_state_dict 2025-03-17T18:45:14.2429604Z >>> assert ddp_optim_state == fsdp_optim_state_dict 2025-03-17T18:45:14.2429881Z 2025-03-17T18:45:14.2429885Z 2025-03-17T18:45:14.2429973Z Args: 2025-03-17T18:45:14.2430247Z model (nn.Module): the nn.Module to the model. 2025-03-17T18:45:14.2430709Z optimizers (Union[None, Optimizer, Iterable[Optimizer]]): 2025-03-17T18:45:14.2431185Z The optimizers that are used to optimize ``model``. 2025-03-17T18:45:14.2431756Z submodules (deprecated): Optional[set[nn.Module]]: only return the model parameters 2025-03-17T18:45:14.2432293Z that belong to the submodules. 
2025-03-17T18:45:14.2432699Z options (StateDictOptions): the options to control how 2025-03-17T18:45:14.2433204Z model state_dict and optimizer state_dict should be returned. See 2025-03-17T18:45:14.2433669Z `StateDictOptions` for the details. 2025-03-17T18:45:14.2433918Z 2025-03-17T18:45:14.2434008Z Returns: 2025-03-17T18:45:14.2434340Z ``Tuple`` that contain model state_dict and optimizer state_dict. 2025-03-17T18:45:14.2434658Z 2025-03-17T18:45:14.2434901Z :rtype: typing.Tuple[typing.Dict[str, ValueType], OptimizerStateType] 2025-03-17T18:45:14.2435255Z 2025-03-17T18:45:14.2435528Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2435917Z 2025-03-17T18:45:14.2446970Z msg = Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_loader.py line=62. 2025-03-17T18:45:14.2447982Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2448384Z 2025-03-17T18:45:14.2448586Z Load a checkpoint into a distributed state dict in SPMD style. 2025-03-17T18:45:14.2448911Z 2025-03-17T18:45:14.2449139Z Each rank must have the same keys in their ``state_dict`` provided to this 2025-03-17T18:45:14.2449746Z API. Mismatched keys may result in hangs or errors. If unsure, you can use 2025-03-17T18:45:14.2450334Z the ``utils._assert_same_keys`` API to check (but may incur communication 2025-03-17T18:45:14.2450780Z costs). 2025-03-17T18:45:14.2450915Z 2025-03-17T18:45:14.2451101Z Each rank will try to read the least amount of data necessary 2025-03-17T18:45:14.2451770Z to fulfill the requested `state_dict`. When loading :class:`ShardedTensor` 2025-03-17T18:45:14.2452394Z or :class:`DTensor` instances, each rank only reads data for their local shards. 2025-03-17T18:45:14.2452768Z 2025-03-17T18:45:14.2453045Z For each ``Stateful`` object (having both a ``state_dict`` and a ``load_state_dict``), 2025-03-17T18:45:14.2453700Z load will first call ``state_dict`` before attempting deserialization, followed by 2025-03-17T18:45:14.2454265Z ``load_state_dict`` once the deserialization is complete. 2025-03-17T18:45:14.2454819Z For each non-``Stateful`` object, load will deserailize the object, and then replace 2025-03-17T18:45:14.2455364Z it in the ``state_dict`` with the deserialized object. 2025-03-17T18:45:14.2455629Z 2025-03-17T18:45:14.2455740Z .. warning:: 2025-03-17T18:45:14.2456050Z All tensors in ``state_dict`` must be allocated on their 2025-03-17T18:45:14.2456508Z destination device *prior to* calling this function. 2025-03-17T18:45:14.2456788Z 2025-03-17T18:45:14.2457035Z All non-tensor data is loaded using `torch.load()` and modified in place 2025-03-17T18:45:14.2457491Z on state_dict. 2025-03-17T18:45:14.2457636Z 2025-03-17T18:45:14.2457736Z .. warning:: 2025-03-17T18:45:14.2458098Z Users must call `load_state_dict` on the root module to ensure load 2025-03-17T18:45:14.2458619Z pos-processing and non-tensor data properly propagates. 2025-03-17T18:45:14.2458919Z 2025-03-17T18:45:14.2459019Z .. note: 2025-03-17T18:45:14.2459377Z If no process group is initialized, this function will assume the intent 2025-03-17T18:45:14.2459963Z is to load a checkpoint into the local process. This can be useful in the 2025-03-17T18:45:14.2460668Z case of local inference, and when using regular Tensors (as opposed to DTensor 2025-03-17T18:45:14.2461144Z or ShardedTensor) 2025-03-17T18:45:14.2461317Z 2025-03-17T18:45:14.2461407Z .. 
note: 2025-03-17T18:45:14.2461677Z Rank 0 is assumed to be the coordinator rank. 2025-03-17T18:45:14.2461938Z 2025-03-17T18:45:14.2462027Z Args: 2025-03-17T18:45:14.2462374Z state_dict (Dict[str, Any]): The state_dict to load the checkpoint into. 2025-03-17T18:45:14.2462860Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:14.2463336Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:14.2463884Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:14.2464392Z It can also be a key if the storage is a key-value store. 2025-03-17T18:45:14.2464789Z (Default: ``None``) 2025-03-17T18:45:14.2465110Z storage_reader (Optional[StorageReader]): 2025-03-17T18:45:14.2465577Z Instance of StorageWriter used to perform reads. If this is not 2025-03-17T18:45:14.2466117Z specified, DCP will automatically infer the reader based on the 2025-03-17T18:45:14.2466725Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:14.2467174Z be raised. (Default: ``None``) 2025-03-17T18:45:14.2467519Z planner (Optional[LoadPlanner]): 2025-03-17T18:45:14.2467953Z Instance of LoadPlanner. If this is not specificed, the default 2025-03-17T18:45:14.2468416Z planner will be used. (Default: ``None``) 2025-03-17T18:45:14.2468798Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:14.2469229Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:14.2469638Z (Default: ``None``) 2025-03-17T18:45:14.2470036Z no_dist (bool): If ``True``, this function will assume the intent is to load 2025-03-17T18:45:14.2470621Z a checkpoint without using cross-rank synchronization. (Default: ``False``) 2025-03-17T18:45:14.2471095Z Returns: 2025-03-17T18:45:14.2471302Z None. 2025-03-17T18:45:14.2471433Z 2025-03-17T18:45:14.2471522Z Examples 2025-03-17T18:45:14.2471751Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.2472035Z >>> my_model = MyModule() 2025-03-17T18:45:14.2472366Z >>> optimizer = Adagrad(my_model.parameters()) 2025-03-17T18:45:14.2472819Z >>> model_state_dict = my_model.state_dict() 2025-03-17T18:45:14.2473303Z >>> fs_storage_reader = torch.distributed.checkpoint.FileSystemReader( 2025-03-17T18:45:14.2473767Z ... "/checkpoint/1" 2025-03-17T18:45:14.2474039Z ... ) 2025-03-17T18:45:14.2474156Z 2025-03-17T18:45:14.2474330Z >>> torch.distributed.checkpoint.load_state_dict( 2025-03-17T18:45:14.2474719Z >>> state_dict=model_state_dict, 2025-03-17T18:45:14.2475069Z >>> storage_reader=fs_storage_reader, 2025-03-17T18:45:14.2475394Z >>> ) 2025-03-17T18:45:14.2475515Z 2025-03-17T18:45:14.2475731Z >>> # module.load_state_dict() function might have customized steps 2025-03-17T18:45:14.2476192Z >>> # to flush the state_dict, must call it to 2025-03-17T18:45:14.2476551Z >>> # ensure correct behavior. 2025-03-17T18:45:14.2476898Z >>> my_model.load_state_dict(model_state_dict) 2025-03-17T18:45:14.2477142Z 2025-03-17T18:45:14.2477249Z .. note:: 2025-03-17T18:45:14.2477587Z load_state_dict uses collectives to coordinate reads across ranks. 2025-03-17T18:45:14.2478433Z For NCCL-based process groups, internal tensor representations of 2025-03-17T18:45:14.2479289Z objects must be moved to the GPU device before communication takes place. 2025-03-17T18:45:14.2480292Z In this case, the device used is given by ``torch.cuda.current_device()`` 2025-03-17T18:45:14.2480876Z and it is the user's responsibility to ensure that this is set so that each 2025-03-17T18:45:14.2481417Z rank has an individual GPU, via ``torch.cuda.set_device()``. 
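The example above still goes through the older ``load_state_dict`` entry point; a minimal sketch of the same flow through ``load`` itself, relying on ``checkpoint_id`` so that DCP infers a ``FileSystemReader`` (the path and module below are placeholders, and no process group is required for a purely local load), could look like::

    import torch
    import torch.distributed.checkpoint as dcp
    import torch.nn as nn

    model = nn.Linear(4, 4)                       # placeholder module
    state_dict = {"model": model.state_dict()}
    # With no storage_reader given, the reader is inferred from checkpoint_id.
    dcp.load(state_dict, checkpoint_id="/checkpoint/1")
    # load() mutates state_dict in place; flush it back into the module.
    model.load_state_dict(state_dict["model"])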
2025-03-17T18:45:14.2481731Z 2025-03-17T18:45:14.2482117Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2482506Z 2025-03-17T18:45:14.2483155Z msg = Cannot scrape callname=save in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_saver.py line=85. 2025-03-17T18:45:14.2484151Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2484723Z 2025-03-17T18:45:14.2484858Z Save a distributed model in SPMD style. 2025-03-17T18:45:14.2485088Z 2025-03-17T18:45:14.2485332Z This function is different from ``torch.save()`` as it handles 2025-03-17T18:45:14.2486280Z ``ShardedTensor`` and ``DTensor`` by having each rank only save their local shards. 2025-03-17T18:45:14.2486661Z 2025-03-17T18:45:14.2486937Z For each ``Stateful`` object (having both a ``state_dict`` and a ``load_state_dict``), 2025-03-17T18:45:14.2487484Z save will call ``state_dict`` before serialization. 2025-03-17T18:45:14.2487749Z 2025-03-17T18:45:14.2487867Z .. warning:: 2025-03-17T18:45:14.2488246Z There are no guarantees of backwards compatibility across PyTorch versions 2025-03-17T18:45:14.2488724Z for saved state_dicts. 2025-03-17T18:45:14.2488915Z 2025-03-17T18:45:14.2489007Z .. warning:: 2025-03-17T18:45:14.2489364Z If using the `process_group` argument, make sure that only its ranks 2025-03-17T18:45:14.2489917Z call `save_state_dict` and that all data in state_dict belongs to it. 2025-03-17T18:45:14.2490263Z 2025-03-17T18:45:14.2490358Z .. note:: 2025-03-17T18:45:14.2490759Z When saving a checkpoint for FSDP's `ShardingStrategy.HYBRID_SHARD`, only one of 2025-03-17T18:45:14.2491417Z the shard_group should be calling `save_state_dict` and the corresponding process 2025-03-17T18:45:14.2491933Z group needs to be passed in. 2025-03-17T18:45:14.2492149Z 2025-03-17T18:45:14.2492236Z .. note:: 2025-03-17T18:45:14.2492648Z If no process group is available, this function assumes the intention is to save the 2025-03-17T18:45:14.2493168Z state_dict in the local process. 2025-03-17T18:45:14.2493401Z 2025-03-17T18:45:14.2493491Z .. note: 2025-03-17T18:45:14.2493756Z Rank 0 is assumed to be the coordinator rank. 2025-03-17T18:45:14.2494008Z 2025-03-17T18:45:14.2494021Z 2025-03-17T18:45:14.2494108Z Args: 2025-03-17T18:45:14.2494392Z state_dict (Dict[str, Any]): The state_dict to save. 2025-03-17T18:45:14.2494884Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:14.2495371Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:14.2495924Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:14.2496440Z It can also be a key if the storage is a key-value store. 2025-03-17T18:45:14.2496842Z (Default: ``None``) 2025-03-17T18:45:14.2497168Z storage_writer (Optional[StorageWriter]): 2025-03-17T18:45:14.2497639Z Instance of StorageWriter used to perform writes. If this is not 2025-03-17T18:45:14.2498194Z specified, DCP will automatically infer the writer based on the 2025-03-17T18:45:14.2498742Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:14.2499180Z be raised. (Default: ``None``) 2025-03-17T18:45:14.2499525Z planner (Optional[SavePlanner]): 2025-03-17T18:45:14.2499966Z Instance of SavePlanner. If this is not specified, the default 2025-03-17T18:45:14.2500437Z planner will be used.
(Default: ``None``) 2025-03-17T18:45:14.2500825Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:14.2501265Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:14.2501677Z (Default: ``None``) 2025-03-17T18:45:14.2501961Z no_dist (bool): 2025-03-17T18:45:14.2502293Z If ``True``, this function will assume the intent is to save 2025-03-17T18:45:14.2502771Z a checkpoint without using cross-rank synchronization. 2025-03-17T18:45:14.2503174Z (Default: ``False``) 2025-03-17T18:45:14.2503413Z 2025-03-17T18:45:14.2503519Z Returns: 2025-03-17T18:45:14.2503817Z Metadata: Metadata object for the saved checkpoint. 2025-03-17T18:45:14.2504095Z 2025-03-17T18:45:14.2504205Z Example: 2025-03-17T18:45:14.2504448Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.2504741Z >>> my_model = MyModule() 2025-03-17T18:45:14.2504930Z 2025-03-17T18:45:14.2505064Z >>> state_dict = {"model": my_model} 2025-03-17T18:45:14.2505288Z 2025-03-17T18:45:14.2505539Z >>> fs_storage_writer = torch.distributed.checkpoint.FileSystemWriter( 2025-03-17T18:45:14.2506013Z ... "/checkpoint/1" 2025-03-17T18:45:14.2506290Z ... ) 2025-03-17T18:45:14.2506647Z >>> torch.distributed.checkpoint.save( 2025-03-17T18:45:14.2507015Z >>> state_dict=state_dict, 2025-03-17T18:45:14.2507353Z >>> storage_writer=fs_storage_writer, 2025-03-17T18:45:14.2507690Z >>> ) 2025-03-17T18:45:14.2507831Z 2025-03-17T18:45:14.2507924Z .. note:: 2025-03-17T18:45:14.2508280Z save_state_dict uses collectives to coordinate writes across ranks. 2025-03-17T18:45:14.2508859Z For NCCL-based process groups, internal tensor representations of 2025-03-17T18:45:14.2509440Z objects must be moved to the GPU device before communication takes place. 2025-03-17T18:45:14.2510033Z In this case, the device used is given by ``torch.cuda.current_device()`` 2025-03-17T18:45:14.2510608Z and it is the user's responsibility to ensure that this is set so that 2025-03-17T18:45:14.2511152Z each rank has an individual GPU, via ``torch.cuda.set_device()``. 2025-03-17T18:45:14.2511485Z 2025-03-17T18:45:14.2511743Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2512132Z 2025-03-17T18:45:14.2512793Z msg = Cannot scrape callname=async_save in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_saver.py line=195. 2025-03-17T18:45:14.2513823Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2514494Z Asynchronous version of ``save``. This code first de-stages the state_dict onto the 2025-03-17T18:45:14.2515194Z staging storage (defaults to CPU memory), and then calls `save` in a separate thread. 2025-03-17T18:45:14.2515622Z 2025-03-17T18:45:14.2515720Z .. warning:: 2025-03-17T18:45:14.2516039Z This feature is experimental and subject to change. 2025-03-17T18:45:14.2516401Z 2025-03-17T18:45:14.2516496Z Args: 2025-03-17T18:45:14.2516794Z state_dict (Dict[str, Any]): The state_dict to save. 2025-03-17T18:45:14.2517232Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:14.2517723Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:14.2518282Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:14.2518798Z It can also be a key if the storage is a key-value store.
2025-03-17T18:45:14.2519205Z (Default: ``None``) 2025-03-17T18:45:14.2519551Z storage_writer (Optional[StorageWriter]): 2025-03-17T18:45:14.2520033Z Instance of StorageWriter used to perform 'stage' and 'save'. If 2025-03-17T18:45:14.2520627Z this is not specified, DCP will automatically infer the writer based on the 2025-03-17T18:45:14.2521213Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:14.2521671Z be raised. (Default: ``None``) 2025-03-17T18:45:14.2522029Z planner (Optional[SavePlanner]): 2025-03-17T18:45:14.2522476Z Instance of SavePlanner. If this is not specificed, the default 2025-03-17T18:45:14.2522951Z planner will be used. (Default: ``None``) 2025-03-17T18:45:14.2523332Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:14.2523785Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:14.2524202Z (Default: ``None``) 2025-03-17T18:45:14.2524404Z 2025-03-17T18:45:14.2524554Z Returns: 2025-03-17T18:45:14.2524918Z Future: A future holding the resultant Metadata object from `save`. 2025-03-17T18:45:14.2525274Z 2025-03-17T18:45:14.2525368Z Example: 2025-03-17T18:45:14.2525621Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.2525929Z >>> my_model = MyModule() 2025-03-17T18:45:14.2526143Z 2025-03-17T18:45:14.2526265Z >>> state_dict = {"model": my_model} 2025-03-17T18:45:14.2526508Z 2025-03-17T18:45:14.2526750Z >>> fs_storage_writer = torch.distributed.checkpoint.FileSystemWriter( 2025-03-17T18:45:14.2527228Z ... "/checkpoint/1" 2025-03-17T18:45:14.2527520Z ... ) 2025-03-17T18:45:14.2536228Z >>> checkpoint_future = torch.distributed.checkpoint.async_save( 2025-03-17T18:45:14.2536694Z >>> state_dict=state_dict, 2025-03-17T18:45:14.2537254Z >>> storage_writer=fs_storage_writer, 2025-03-17T18:45:14.2537597Z >>> ) 2025-03-17T18:45:14.2537829Z >>> 2025-03-17T18:45:14.2538077Z >>> # ... do some work ... 2025-03-17T18:45:14.2538379Z >>> 2025-03-17T18:45:14.2538631Z >>> checkpoint_future.result() 2025-03-17T18:45:14.2538862Z 2025-03-17T18:45:14.2538950Z 2025-03-17T18:45:14.2539331Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2539719Z 2025-03-17T18:45:14.2604695Z msg = Cannot scrape callname=construct_and_record_rdzv_event in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/events/__init__.py line=94. 2025-03-17T18:45:14.2605791Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.2606190Z 2025-03-17T18:45:14.2606433Z Initialize rendezvous event object and record its operations. 2025-03-17T18:45:14.2606769Z 2025-03-17T18:45:14.2606856Z Args: 2025-03-17T18:45:14.2607117Z run_id (str): The run id of the rendezvous. 2025-03-17T18:45:14.2607532Z message (str): The message describing the event. 2025-03-17T18:45:14.2608074Z node_state (NodeState): The state of the node (INIT, RUNNING, SUCCEEDED, FAILED). 2025-03-17T18:45:14.2608659Z name (str): Event name. (E.g. Current action being performed). 2025-03-17T18:45:14.2609096Z hostname (str): Hostname of the node. 2025-03-17T18:45:14.2609482Z pid (Optional[int]): The process id of the node. 2025-03-17T18:45:14.2610141Z master_endpoint (str): The master endpoint for the rendezvous store, if known. 2025-03-17T18:45:14.2610805Z local_id (Optional[int]): The local_id of the node, if defined in dynamic_rendezvous.py 2025-03-17T18:45:14.2611375Z rank (Optional[int]): The rank of the node, if known. 
2025-03-17T18:45:14.2611754Z Returns: 2025-03-17T18:45:14.2611980Z None 2025-03-17T18:45:14.2612198Z Example: 2025-03-17T18:45:14.2612458Z >>> # See DynamicRendezvousHandler class 2025-03-17T18:45:14.2612802Z >>> def _record( 2025-03-17T18:45:14.2613059Z ... self, 2025-03-17T18:45:14.2613297Z ... message: str, 2025-03-17T18:45:14.2613621Z ... node_state: NodeState = NodeState.RUNNING, 2025-03-17T18:45:14.2614001Z ... rank: Optional[int] = None, 2025-03-17T18:45:14.2614323Z ... ) -> None: 2025-03-17T18:45:14.2614600Z ... construct_and_record_rdzv_event( 2025-03-17T18:45:14.2615010Z ... name=f"{self.__class__.__name__}.{get_method_name()}", 2025-03-17T18:45:14.2615433Z ... run_id=self._settings.run_id, 2025-03-17T18:45:14.2615790Z ... message=message, 2025-03-17T18:45:14.2616101Z ... node_state=node_state, 2025-03-17T18:45:14.2616444Z ... hostname=self._this_node.addr, 2025-03-17T18:45:14.2616795Z ... pid=self._this_node.pid, 2025-03-17T18:45:14.2617150Z ... local_id=self._this_node.local_id, 2025-03-17T18:45:14.2617496Z ... rank=rank, 2025-03-17T18:45:14.2617765Z ... ) 2025-03-17T18:45:14.2617896Z 2025-03-17T18:45:14.2618170Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.2618590Z 2025-03-17T18:45:14.4481892Z msg = Cannot scrape callname=MixedPrecision in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py line=114. 2025-03-17T18:45:14.4483033Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4483424Z 2025-03-17T18:45:14.4483678Z This configures FSDP-native mixed precision training. 2025-03-17T18:45:14.4483978Z 2025-03-17T18:45:14.4484076Z Attributes: 2025-03-17T18:45:14.4484460Z param_dtype (Optional[torch.dtype]): This specifies the dtype for model 2025-03-17T18:45:14.4485043Z parameters during forward and backward and thus the dtype for 2025-03-17T18:45:14.4485787Z forward and backward computation. Outside forward and backward, the 2025-03-17T18:45:14.4486340Z *sharded* parameters are kept in full precision (e.g. for the 2025-03-17T18:45:14.4486879Z optimizer step), and for model checkpointing, the parameters are 2025-03-17T18:45:14.4487388Z always saved in full precision. (Default: ``None``) 2025-03-17T18:45:14.4487895Z reduce_dtype (Optional[torch.dtype]): This specifies the dtype for 2025-03-17T18:45:14.4488461Z gradient reduction (i.e. reduce-scatter or all-reduce). If this is 2025-03-17T18:45:14.4488993Z ``None`` but ``param_dtype`` is not ``None``, then this takes on 2025-03-17T18:45:14.4489512Z the ``param_dtype`` value, still running gradient reduction in low 2025-03-17T18:45:14.4490070Z precision. This is permitted to differ from ``param_dtype``, e.g. 2025-03-17T18:45:14.4490743Z to force gradient reduction to run in full precision. (Default: 2025-03-17T18:45:14.4491210Z ``None``) 2025-03-17T18:45:14.4491586Z buffer_dtype (Optional[torch.dtype]): This specifies the dtype for 2025-03-17T18:45:14.4492485Z buffers. FSDP does not shard buffers. Rather, FSDP casts them to 2025-03-17T18:45:14.4493080Z ``buffer_dtype`` in the first forward pass and keeps them in that 2025-03-17T18:45:14.4493629Z dtype thereafter. For model checkpointing, the buffers are saved 2025-03-17T18:45:14.4494160Z in full precision except for ``LOCAL_STATE_DICT``. 
(Default: 2025-03-17T18:45:14.4494566Z ``None``) 2025-03-17T18:45:14.4494927Z keep_low_precision_grads (bool): If ``False``, then FSDP upcasts 2025-03-17T18:45:14.4495601Z gradients to full precision after the backward pass in preparation 2025-03-17T18:45:14.4496172Z for the optimizer step. If ``True``, then FSDP keeps the gradients 2025-03-17T18:45:14.4496719Z in the dtype used for gradient reduction, which can save memory if 2025-03-17T18:45:14.4497272Z using a custom optimizer that supports running in low precision. 2025-03-17T18:45:14.4497720Z (Default: ``False``) 2025-03-17T18:45:14.4498135Z cast_forward_inputs (bool): If ``True``, then this FSDP module casts 2025-03-17T18:45:14.4498692Z its forward args and kwargs to ``param_dtype``. This is to ensure 2025-03-17T18:45:14.4499249Z that parameter and input dtypes match for forward computation, as 2025-03-17T18:45:14.4499804Z required by many ops. This may need to be set to ``True`` when only 2025-03-17T18:45:14.4500372Z applying mixed precision to some but not all FSDP modules, in which 2025-03-17T18:45:14.4500943Z case a mixed-precision FSDP submodule needs to recast its inputs. 2025-03-17T18:45:14.4501391Z (Default: ``False``) 2025-03-17T18:45:14.4501807Z cast_root_forward_inputs (bool): If ``True``, then the root FSDP module 2025-03-17T18:45:14.4502366Z casts its forward args and kwargs to ``param_dtype``, overriding 2025-03-17T18:45:14.4502890Z the value of ``cast_forward_inputs``. For non-root FSDP modules, 2025-03-17T18:45:14.4503361Z this does not do anything. (Default: ``True``) 2025-03-17T18:45:14.4503848Z _module_classes_to_ignore: (Sequence[Type[nn.Module]]): This specifies 2025-03-17T18:45:14.4504392Z module classes to ignore for mixed precision when using an 2025-03-17T18:45:14.4504937Z ``auto_wrap_policy``: Modules of these classes will have FSDP 2025-03-17T18:45:14.4505469Z applied to them separately with mixed precision disabled (meaning 2025-03-17T18:45:14.4506012Z that the final FSDP construction would deviate from the specified 2025-03-17T18:45:14.4506632Z policy). If ``auto_wrap_policy`` is not specified, then this does 2025-03-17T18:45:14.4507169Z not do anything. This API is experimental and subject to change. 2025-03-17T18:45:14.4507612Z (Default: ``(_BatchNorm,)``) 2025-03-17T18:45:14.4507836Z 2025-03-17T18:45:14.4508025Z .. note:: This API is experimental and subject to change. 2025-03-17T18:45:14.4508384Z 2025-03-17T18:45:14.4508616Z .. note:: Only floating point tensors are cast to their specified dtypes. 2025-03-17T18:45:14.4508974Z 2025-03-17T18:45:14.4509164Z .. note:: In ``summon_full_params``, parameters are forced to full 2025-03-17T18:45:14.4509598Z precision, but buffers are not. 2025-03-17T18:45:14.4509824Z 2025-03-17T18:45:14.4510036Z .. note:: Layer norm and batch norm accumulate in ``float32`` even when 2025-03-17T18:45:14.4510592Z their inputs are in a low precision like ``float16`` or ``bfloat16``. 2025-03-17T18:45:14.4511171Z Disabling FSDP's mixed precision for those norm modules only means that 2025-03-17T18:45:14.4511755Z the affine parameters are kept in ``float32``. However, this incurs 2025-03-17T18:45:14.4512342Z separate all-gathers and reduce-scatters for those norm modules, which 2025-03-17T18:45:14.4512937Z may be inefficient, so if the workload permits, the user should prefer 2025-03-17T18:45:14.4513444Z to still apply mixed precision to those modules. 2025-03-17T18:45:14.4513723Z 2025-03-17T18:45:14.4513934Z .. 
note:: By default, if the user passes a model with any ``_BatchNorm`` 2025-03-17T18:45:14.4514483Z modules and specifies an ``auto_wrap_policy``, then the batch norm 2025-03-17T18:45:14.4515055Z modules will have FSDP applied to them separately with mixed precision 2025-03-17T18:45:14.4515594Z disabled. See the ``_module_classes_to_ignore`` argument. 2025-03-17T18:45:14.4515898Z 2025-03-17T18:45:14.4516112Z .. note:: ``MixedPrecision`` has ``cast_root_forward_inputs=True`` and 2025-03-17T18:45:14.4516735Z ``cast_forward_inputs=False`` by default. For the root FSDP instance, 2025-03-17T18:45:14.4517263Z its ``cast_root_forward_inputs`` takes precedence over its 2025-03-17T18:45:14.4517748Z ``cast_forward_inputs``. For non-root FSDP instances, their 2025-03-17T18:45:14.4518283Z ``cast_root_forward_inputs`` values are ignored. The default setting is 2025-03-17T18:45:14.4518871Z sufficient for the typical case where each FSDP instance has the same 2025-03-17T18:45:14.4519460Z ``MixedPrecision`` configuration and only needs to cast inputs to the 2025-03-17T18:45:14.4520010Z ``param_dtype`` at the beginning of the model's forward pass. 2025-03-17T18:45:14.4520317Z 2025-03-17T18:45:14.4520542Z .. note:: For nested FSDP instances with different ``MixedPrecision`` 2025-03-17T18:45:14.4521122Z configurations, we recommend setting individual ``cast_forward_inputs`` 2025-03-17T18:45:14.4521703Z values to configure casting inputs or not before each instance's 2025-03-17T18:45:14.4522240Z forward. In such a case, since the casts happen before each FSDP 2025-03-17T18:45:14.4522786Z instance's forward, a parent FSDP instance should have its non-FSDP 2025-03-17T18:45:14.4523364Z submodules run before its FSDP submodules to avoid the activation dtype 2025-03-17T18:45:14.4523943Z being changed due to a different ``MixedPrecision`` configuration. 2025-03-17T18:45:14.4524275Z 2025-03-17T18:45:14.4524384Z Example:: 2025-03-17T18:45:14.4524521Z 2025-03-17T18:45:14.4524669Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4525084Z >>> model = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3)) 2025-03-17T18:45:14.4525515Z >>> model[1] = FSDP( 2025-03-17T18:45:14.4525803Z >>> model[1], 2025-03-17T18:45:14.4526288Z >>> mixed_precision=MixedPrecision(param_dtype=torch.float16, cast_forward_inputs=True), 2025-03-17T18:45:14.4526817Z >>> ) 2025-03-17T18:45:14.4527056Z >>> model = FSDP( 2025-03-17T18:45:14.4527325Z >>> model, 2025-03-17T18:45:14.4527802Z >>> mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, cast_forward_inputs=True), 2025-03-17T18:45:14.4528328Z >>> ) 2025-03-17T18:45:14.4528475Z 2025-03-17T18:45:14.4528691Z The above shows a working example. On the other hand, if ``model[1]`` 2025-03-17T18:45:14.4529277Z were replaced with ``model[0]``, meaning that the submodule using 2025-03-17T18:45:14.4529836Z different ``MixedPrecision`` ran its forward first, then ``model[1]`` 2025-03-17T18:45:14.4530417Z would incorrectly see ``float16`` activations instead of ``bfloat16`` 2025-03-17T18:45:14.4530861Z ones. 2025-03-17T18:45:14.4530990Z 2025-03-17T18:45:14.4530994Z 2025-03-17T18:45:14.4531267Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4531640Z 2025-03-17T18:45:14.4532270Z msg = Cannot scrape callname=FullStateDictConfig in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py line=295. 
2025-03-17T18:45:14.4533256Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4533653Z 2025-03-17T18:45:14.4533865Z ``FullStateDictConfig`` is a config class meant to be used with 2025-03-17T18:45:14.4534401Z ``StateDictType.FULL_STATE_DICT``. We recommend enabling both 2025-03-17T18:45:14.4534940Z ``offload_to_cpu=True`` and ``rank0_only=True`` when saving full state 2025-03-17T18:45:14.4535505Z dicts to save GPU memory and CPU memory, respectively. This config class 2025-03-17T18:45:14.4536065Z is meant to be used via the :func:`state_dict_type` context manager as 2025-03-17T18:45:14.4536490Z follows: 2025-03-17T18:45:14.4536615Z 2025-03-17T18:45:14.4536925Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4537426Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:14.4537929Z >>> fsdp = FSDP(model, auto_wrap_policy=...) 2025-03-17T18:45:14.4538390Z >>> cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True) 2025-03-17T18:45:14.4539053Z >>> with FSDP.state_dict_type(fsdp, StateDictType.FULL_STATE_DICT, cfg): 2025-03-17T18:45:14.4539529Z >>> state = fsdp.state_dict() 2025-03-17T18:45:14.4539968Z >>> # `state` will be empty on non rank 0 and contain CPU tensors on rank 0. 2025-03-17T18:45:14.4540559Z >>> # To reload checkpoint for inference, finetuning, transfer learning, etc: 2025-03-17T18:45:14.4541172Z >>> model = model_fn() # Initialize model in preparation for wrapping with FSDP 2025-03-17T18:45:14.4541658Z >>> if dist.get_rank() == 0: 2025-03-17T18:45:14.4542041Z >>> # Load checkpoint only on rank 0 to avoid memory redundancy 2025-03-17T18:45:14.4542501Z >>> state_dict = torch.load("my_checkpoint.pt") 2025-03-17T18:45:14.4542903Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:14.4543389Z >>> # All ranks initialize FSDP module as usual. `sync_module_states` argument 2025-03-17T18:45:14.4544012Z >>> # communicates loaded checkpoint states from rank 0 to rest of the world. 2025-03-17T18:45:14.4544490Z >>> fsdp = FSDP( 2025-03-17T18:45:14.4544753Z ... model, 2025-03-17T18:45:14.4545044Z ... device_id=torch.cuda.current_device(), 2025-03-17T18:45:14.4545418Z ... auto_wrap_policy=..., 2025-03-17T18:45:14.4545744Z ... sync_module_states=True, 2025-03-17T18:45:14.4546056Z ... ) 2025-03-17T18:45:14.4546418Z >>> # After this point, all ranks have FSDP model with loaded checkpoint. 2025-03-17T18:45:14.4546826Z 2025-03-17T18:45:14.4546937Z Attributes: 2025-03-17T18:45:14.4547290Z rank0_only (bool): If ``True``, then only rank 0 saves the full state 2025-03-17T18:45:14.4547877Z dict, and nonzero ranks save an empty dict. If ``False``, then all 2025-03-17T18:45:14.4548376Z ranks save the full state dict. (Default: ``False``) 2025-03-17T18:45:14.4548655Z 2025-03-17T18:45:14.4548928Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4549304Z 2025-03-17T18:45:14.4623091Z msg = Cannot scrape callname=FullyShardedDataParallel.set_state_dict_type in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=639. 2025-03-17T18:45:14.4625442Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4626940Z Set the ``state_dict_type`` of all the descendant FSDP modules of the target module. 2025-03-17T18:45:14.4627646Z 2025-03-17T18:45:14.4628105Z Also takes (optional) configuration for the model's and optimizer's state dict. 
2025-03-17T18:45:14.4629211Z The target module does not have to be a FSDP module. If the target 2025-03-17T18:45:14.4630294Z module is a FSDP module, its ``state_dict_type`` will also be changed. 2025-03-17T18:45:14.4630304Z 2025-03-17T18:45:14.4630698Z .. note:: This API should be called for only the top-level (root) 2025-03-17T18:45:14.4630860Z module. 2025-03-17T18:45:14.4630869Z 2025-03-17T18:45:14.4631303Z .. note:: This API enables users to transparently use the conventional 2025-03-17T18:45:14.4631671Z ``state_dict`` API to take model checkpoints in cases where the 2025-03-17T18:45:14.4632094Z root FSDP module is wrapped by another ``nn.Module``. For example, 2025-03-17T18:45:14.4632510Z the following will ensure ``state_dict`` is called on all non-FSDP 2025-03-17T18:45:14.4632980Z instances, while dispatching into `sharded_state_dict` implementation 2025-03-17T18:45:14.4633149Z for FSDP: 2025-03-17T18:45:14.4633158Z 2025-03-17T18:45:14.4633347Z Example:: 2025-03-17T18:45:14.4633355Z 2025-03-17T18:45:14.4633594Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4633808Z >>> model = DDP(FSDP(...)) 2025-03-17T18:45:14.4634014Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:14.4634192Z >>> model, 2025-03-17T18:45:14.4634568Z >>> StateDictType.SHARDED_STATE_DICT, 2025-03-17T18:45:14.4635000Z >>> state_dict_config = ShardedStateDictConfig(offload_to_cpu=True), 2025-03-17T18:45:14.4635426Z >>> optim_state_dict_config = OptimStateDictConfig(offload_to_cpu=True), 2025-03-17T18:45:14.4635607Z >>> ) 2025-03-17T18:45:14.4635994Z >>> param_state_dict = model.state_dict() 2025-03-17T18:45:14.4636326Z >>> optim_state_dict = FSDP.optim_state_dict(model, optim) 2025-03-17T18:45:14.4636335Z 2025-03-17T18:45:14.4636497Z Args: 2025-03-17T18:45:14.4636916Z module (torch.nn.Module): Root module. 2025-03-17T18:45:14.4637381Z state_dict_type (StateDictType): the desired ``state_dict_type`` to set. 2025-03-17T18:45:14.4637842Z state_dict_config (Optional[StateDictConfig]): the configuration for the 2025-03-17T18:45:14.4638042Z target ``state_dict_type``. 2025-03-17T18:45:14.4638542Z optim_state_dict_config (Optional[OptimStateDictConfig]): the configuration 2025-03-17T18:45:14.4638751Z for the optimizer state dict. 2025-03-17T18:45:14.4638761Z 2025-03-17T18:45:14.4638932Z Returns: 2025-03-17T18:45:14.4639354Z A StateDictSettings that include the previous state_dict type and 2025-03-17T18:45:14.4639580Z configuration for the module. 2025-03-17T18:45:14.4639731Z 2025-03-17T18:45:14.4640240Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4640249Z 2025-03-17T18:45:14.4641845Z msg = Cannot scrape callname=FullyShardedDataParallel.state_dict_type in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=797. 2025-03-17T18:45:14.4642498Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4642990Z Set the ``state_dict_type`` of all the descendant FSDP modules of the target module. 2025-03-17T18:45:14.4643000Z 2025-03-17T18:45:14.4643653Z This context manager has the same functions as :meth:`set_state_dict_type`. Read the document of 2025-03-17T18:45:14.4643896Z :meth:`set_state_dict_type` for the detail. 
2025-03-17T18:45:14.4643904Z 2025-03-17T18:45:14.4644091Z Example:: 2025-03-17T18:45:14.4644183Z 2025-03-17T18:45:14.4644430Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4644642Z >>> model = DDP(FSDP(...)) 2025-03-17T18:45:14.4644846Z >>> with FSDP.state_dict_type( 2025-03-17T18:45:14.4645031Z >>> model, 2025-03-17T18:45:14.4645278Z >>> StateDictType.SHARDED_STATE_DICT, 2025-03-17T18:45:14.4645449Z >>> ): 2025-03-17T18:45:14.4645675Z >>> checkpoint = model.state_dict() 2025-03-17T18:45:14.4645683Z 2025-03-17T18:45:14.4645859Z Args: 2025-03-17T18:45:14.4646097Z module (torch.nn.Module): Root module. 2025-03-17T18:45:14.4646578Z state_dict_type (StateDictType): the desired ``state_dict_type`` to set. 2025-03-17T18:45:14.4647020Z state_dict_config (Optional[StateDictConfig]): the model ``state_dict`` 2025-03-17T18:45:14.4647330Z configuration for the target ``state_dict_type``. 2025-03-17T18:45:14.4647781Z optim_state_dict_config (Optional[OptimStateDictConfig]): the optimizer 2025-03-17T18:45:14.4648174Z ``state_dict`` configuration for the target ``state_dict_type``. 2025-03-17T18:45:14.4648338Z 2025-03-17T18:45:14.4648847Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4648860Z 2025-03-17T18:45:14.4692821Z msg = Cannot scrape callname=FullyShardedDataParallel.optim_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=1810. 2025-03-17T18:45:14.4693355Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4693365Z 2025-03-17T18:45:14.4694038Z Transform the state-dict of an optimizer corresponding to a sharded model. 2025-03-17T18:45:14.4694049Z 2025-03-17T18:45:14.4694429Z The given state-dict can be transformed to one of three types: 2025-03-17T18:45:14.4695023Z 1) full optimizer state_dict, 2) sharded optimizer state_dict, 3) local optimizer state_dict. 2025-03-17T18:45:14.4695036Z 2025-03-17T18:45:14.4695462Z For full optimizer state_dict, all states are unflattened and not sharded. 2025-03-17T18:45:14.4695891Z Rank0 only and CPU only can be specified via :meth:`state_dict_type` to 2025-03-17T18:45:14.4696096Z avoid OOM. 2025-03-17T18:45:14.4696104Z 2025-03-17T18:45:14.4696565Z For sharded optimizer state_dict, all states are unflattened but sharded. 2025-03-17T18:45:14.4696916Z CPU only can be specified via :meth:`state_dict_type` to further save 2025-03-17T18:45:14.4697077Z memory. 2025-03-17T18:45:14.4697084Z 2025-03-17T18:45:14.4697451Z For local state_dict, no transformation will be performed. But a state 2025-03-17T18:45:14.4697874Z will be converted from nn.Tensor to ShardedTensor to represent its sharding 2025-03-17T18:45:14.4698062Z nature (this is not supported yet). 2025-03-17T18:45:14.4698071Z 2025-03-17T18:45:14.4698247Z Example:: 2025-03-17T18:45:14.4698256Z 2025-03-17T18:45:14.4698495Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4698968Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:14.4699271Z >>> from torch.distributed.fsdp import StateDictType 2025-03-17T18:45:14.4699608Z >>> from torch.distributed.fsdp import FullStateDictConfig 2025-03-17T18:45:14.4700079Z >>> from torch.distributed.fsdp import FullOptimStateDictConfig 2025-03-17T18:45:14.4700282Z >>> # Save a checkpoint 2025-03-17T18:45:14.4700470Z >>> model, optim = ... 
2025-03-17T18:45:14.4700682Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:14.4700850Z >>> model, 2025-03-17T18:45:14.4701080Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:14.4701322Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4701599Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4701756Z >>> ) 2025-03-17T18:45:14.4701971Z >>> state_dict = model.state_dict() 2025-03-17T18:45:14.4702364Z >>> optim_state_dict = FSDP.optim_state_dict(model, optim) 2025-03-17T18:45:14.4702651Z >>> save_a_checkpoint(state_dict, optim_state_dict) 2025-03-17T18:45:14.4702834Z >>> # Load a checkpoint 2025-03-17T18:45:14.4703029Z >>> model, optim = ... 2025-03-17T18:45:14.4703301Z >>> state_dict, optim_state_dict = load_a_checkpoint() 2025-03-17T18:45:14.4703516Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:14.4703669Z >>> model, 2025-03-17T18:45:14.4703900Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:14.4704143Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4704427Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4704589Z >>> ) 2025-03-17T18:45:14.4704812Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:14.4705089Z >>> optim_state_dict = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:14.4705305Z >>> model, optim, optim_state_dict 2025-03-17T18:45:14.4705453Z >>> ) 2025-03-17T18:45:14.4705699Z >>> optim.load_state_dict(optim_state_dict) 2025-03-17T18:45:14.4705707Z 2025-03-17T18:45:14.4705856Z Args: 2025-03-17T18:45:14.4706245Z model (torch.nn.Module): Root module (which may or may not be a 2025-03-17T18:45:14.4706711Z :class:`FullyShardedDataParallel` instance) whose parameters 2025-03-17T18:45:14.4706975Z were passed into the optimizer ``optim``. 2025-03-17T18:45:14.4707314Z optim (torch.optim.Optimizer): Optimizer for ``model`` 's 2025-03-17T18:45:14.4707498Z parameters. 2025-03-17T18:45:14.4707899Z optim_state_dict (Dict[str, Any]): the target optimizer state_dict to 2025-03-17T18:45:14.4708417Z transform. If the value is None, optim.state_dict() will be used. ( 2025-03-17T18:45:14.4708597Z Default: ``None``) 2025-03-17T18:45:14.4709050Z group (dist.ProcessGroup): Model's process group across which parameters 2025-03-17T18:45:14.4709395Z are sharded or ``None`` if using the default process group. ( 2025-03-17T18:45:14.4709571Z Default: ``None``) 2025-03-17T18:45:14.4709594Z 2025-03-17T18:45:14.4709731Z Returns: 2025-03-17T18:45:14.4709953Z Dict[str, Any]: A :class:`dict` containing the optimizer state for 2025-03-17T18:45:14.4710141Z ``model``. The sharding of the optimizer state is based on 2025-03-17T18:45:14.4710248Z ``state_dict_type``. 2025-03-17T18:45:14.4710265Z 2025-03-17T18:45:14.4710528Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4710532Z 2025-03-17T18:45:14.4711443Z msg = Cannot scrape callname=FullyShardedDataParallel.optim_state_dict_to_load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=1908. 2025-03-17T18:45:14.4711714Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.4711719Z 2025-03-17T18:45:14.4712097Z Convert an optimizer state-dict so that it can be loaded into the optimizer associated with the FSDP model. 
2025-03-17T18:45:14.4712105Z 2025-03-17T18:45:14.4712288Z Given a ``optim_state_dict`` that is transformed through 2025-03-17T18:45:14.4712508Z :meth:`optim_state_dict`, it gets converted to the flattened optimizer 2025-03-17T18:45:14.4712738Z state_dict that can be loaded to ``optim`` which is the optimizer for 2025-03-17T18:45:14.4712986Z ``model``. ``model`` must be sharded by FullyShardedDataParallel. 2025-03-17T18:45:14.4712990Z 2025-03-17T18:45:14.4713138Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.4713383Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:14.4713560Z >>> from torch.distributed.fsdp import StateDictType 2025-03-17T18:45:14.4713747Z >>> from torch.distributed.fsdp import FullStateDictConfig 2025-03-17T18:45:14.4713969Z >>> from torch.distributed.fsdp import FullOptimStateDictConfig 2025-03-17T18:45:14.4714075Z >>> # Save a checkpoint 2025-03-17T18:45:14.4714194Z >>> model, optim = ... 2025-03-17T18:45:14.4714345Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:14.4714439Z >>> model, 2025-03-17T18:45:14.4714578Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:14.4714714Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4714878Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4714971Z >>> ) 2025-03-17T18:45:14.4715105Z >>> state_dict = model.state_dict() 2025-03-17T18:45:14.4715226Z >>> original_osd = optim.state_dict() 2025-03-17T18:45:14.4715374Z >>> optim_state_dict = FSDP.optim_state_dict( 2025-03-17T18:45:14.4715468Z >>> model, 2025-03-17T18:45:14.4715579Z >>> optim, 2025-03-17T18:45:14.4715701Z >>> optim_state_dict=original_osd 2025-03-17T18:45:14.4715804Z >>> ) 2025-03-17T18:45:14.4715956Z >>> save_a_checkpoint(state_dict, optim_state_dict) 2025-03-17T18:45:14.4716075Z >>> # Load a checkpoint 2025-03-17T18:45:14.4716179Z >>> model, optim = ... 2025-03-17T18:45:14.4716353Z >>> state_dict, optim_state_dict = load_a_checkpoint() 2025-03-17T18:45:14.4716467Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:14.4716574Z >>> model, 2025-03-17T18:45:14.4716699Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:14.4716847Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4716999Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:14.4717105Z >>> ) 2025-03-17T18:45:14.4717230Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:14.4717397Z >>> optim_state_dict = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:14.4717518Z >>> model, optim, optim_state_dict 2025-03-17T18:45:14.4717622Z >>> ) 2025-03-17T18:45:14.4717827Z >>> optim.load_state_dict(optim_state_dict) 2025-03-17T18:45:14.4717833Z 2025-03-17T18:45:14.4717933Z Args: 2025-03-17T18:45:14.4718137Z model (torch.nn.Module): Root module (which may or may not be a 2025-03-17T18:45:14.4718361Z :class:`FullyShardedDataParallel` instance) whose parameters 2025-03-17T18:45:14.4718501Z were passed into the optimizer ``optim``. 2025-03-17T18:45:14.4718702Z optim (torch.optim.Optimizer): Optimizer for ``model`` 's 2025-03-17T18:45:14.4718839Z parameters. 2025-03-17T18:45:14.4719204Z optim_state_dict (Dict[str, Any]): The optimizer states to be loaded. 2025-03-17T18:45:14.4719583Z is_named_optimizer (bool): Is this optimizer a NamedOptimizer or 2025-03-17T18:45:14.4719930Z KeyedOptimizer. Only set to True if ``optim`` is TorchRec's 2025-03-17T18:45:14.4720236Z KeyedOptimizer or torch.distributed's NamedOptimizer. 
2025-03-17T18:45:14.4720607Z load_directly (bool): If this is set to True, this API will also 2025-03-17T18:45:14.4720974Z call optim.load_state_dict(result) before returning the result. 2025-03-17T18:45:14.4721390Z Otherwise, users are responsible to call ``optim.load_state_dict()`` 2025-03-17T18:45:14.4721570Z (Default: ``False``) 2025-03-17T18:45:14.4722042Z group (dist.ProcessGroup): Model's process group across which parameters 2025-03-17T18:45:14.4722373Z are sharded or ``None`` if using the default process group. ( 2025-03-17T18:45:14.4722568Z Default: ``None``) 2025-03-17T18:45:14.4722578Z 2025-03-17T18:45:14.4723053Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.4723141Z 2025-03-17T18:45:14.5233718Z msg = Cannot scrape callname=_RemoteModule.__init__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=128. 2025-03-17T18:45:14.5234273Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5234316Z 2025-03-17T18:45:14.5234746Z RemoteModule instance can only be created after RPC initialization. 2025-03-17T18:45:14.5234756Z 2025-03-17T18:45:14.5235132Z It creates a user-specified module on a specified remote node. 2025-03-17T18:45:14.5235577Z It behaves like a regular ``nn.Module`` except that the ``forward`` method is 2025-03-17T18:45:14.5235955Z executed on the remote node. 2025-03-17T18:45:14.5236418Z It takes care of autograd recording to ensure the backward pass propagates 2025-03-17T18:45:14.5236724Z gradients back to the corresponding remote module. 2025-03-17T18:45:14.5265241Z It can be shared across processors using `RPC framework `__, 2025-03-17T18:45:14.5265548Z without incurring any overheads of copying the actual module, 2025-03-17T18:45:14.5265777Z which is equivalent to an :class:`~torch.distributed.rpc.RRef` 2025-03-17T18:45:14.5265945Z pointing to the remote module. 2025-03-17T18:45:14.5265975Z 2025-03-17T18:45:14.5266257Z The arguments of ``forward_async`` and ``forward`` are the same as 2025-03-17T18:45:14.5266599Z the ``forward`` method of the module returned by the ``module_cls``. 2025-03-17T18:45:14.5266606Z 2025-03-17T18:45:14.5266931Z Apart from ``forward_async`` and ``forward``, no other methods are supported from nn.Module for now. 2025-03-17T18:45:14.5266943Z 2025-03-17T18:45:14.5267221Z Particularly, to create a hybrid model, typically the local modules should be 2025-03-17T18:45:14.5267605Z created outside of remote modules, rather than as submodules of any remote module (by calling ``add_module``). 2025-03-17T18:45:14.5267723Z Hybrid Example: 2025-03-17T18:45:14.5267844Z >>> class HybridModel(nn.Module): 2025-03-17T18:45:14.5267978Z >>> def __init__(self) -> None: 2025-03-17T18:45:14.5268093Z >>> nn.Module.__init__(self) 2025-03-17T18:45:14.5268253Z >>> self.remote_embedding = RemoteModule(...) 2025-03-17T18:45:14.5268522Z >>> self.local_linear = nn.Linear(...) 2025-03-17T18:45:14.5268528Z 2025-03-17T18:45:14.5268753Z For example, if ``module_cls`` returns an instance of ``nn.Linear``, 2025-03-17T18:45:14.5269010Z that has ``forward`` method signature, ``def forward(input: Tensor) -> Tensor:``, 2025-03-17T18:45:14.5269237Z the generated ``RemoteModule`` will have 2 methods in signature of 2025-03-17T18:45:14.5269382Z ``def forward(input: Tensor) -> Tensor:`` and 2025-03-17T18:45:14.5269567Z ``def forward_async(input: Tensor) -> Future[Tensor]:``. 
2025-03-17T18:45:14.5269572Z 2025-03-17T18:45:14.5269672Z .. note:: 2025-03-17T18:45:14.5269839Z If the remote module is placed on a cuda device, 2025-03-17T18:45:14.5270187Z any input CPU tensors will be automatically moved to the same cuda device, 2025-03-17T18:45:14.5270919Z and GPU tensors are returned over the wire according to the device map of the remote worker on TensorPipe RPC backend. 2025-03-17T18:45:14.5270929Z 2025-03-17T18:45:14.5271076Z Args: 2025-03-17T18:45:14.5271649Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:14.5272174Z The device can be a local device or a remote device specified by one of the following remote 2025-03-17T18:45:14.5272345Z formats: 2025-03-17T18:45:14.5272359Z 2025-03-17T18:45:14.5272613Z 1. "rank:/" (ex: "rank:0/cuda:0"). 2025-03-17T18:45:14.5272902Z 2. "/" (ex: "trainer0/cuda:0"). 2025-03-17T18:45:14.5272909Z 2025-03-17T18:45:14.5273366Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:14.5273706Z module_cls (nn.Module): For example, 2025-03-17T18:45:14.5273907Z >>> class MyModule(nn.Module): 2025-03-17T18:45:14.5274114Z >>> def forward(input): 2025-03-17T18:45:14.5274295Z >>> return input + 1 2025-03-17T18:45:14.5274462Z >>> 2025-03-17T18:45:14.5274647Z >>> module_cls = MyModule 2025-03-17T18:45:14.5275040Z args (Sequence, optional): args to be passed to ``module_cls``. 2025-03-17T18:45:14.5275387Z kwargs (Dict, optional): kwargs to be passed to ``module_cls``. 2025-03-17T18:45:14.5275921Z _module_interface_cls (type, optional): The TorchScript interface type for the module 2025-03-17T18:45:14.5276456Z to be created. The type object should be decorated by @torch.jit.interface. 2025-03-17T18:45:14.5276889Z If not provided, the generated RemoteModule is not torchscript-able. 2025-03-17T18:45:14.5277320Z Warning, this is an experimental API and susceptible to frequent changes. 2025-03-17T18:45:14.5277335Z 2025-03-17T18:45:14.5277507Z Returns: 2025-03-17T18:45:14.5277974Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:14.5278432Z user-provided ``module_cls``, it has a blocking ``forward`` method and an 2025-03-17T18:45:14.5278955Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:14.5279243Z on the user-provided module on the remote side. 
2025-03-17T18:45:14.5279252Z 2025-03-17T18:45:14.5279426Z Example:: 2025-03-17T18:45:14.5279727Z Run the following code in two different processes: 2025-03-17T18:45:14.5279735Z 2025-03-17T18:45:14.5279954Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.5280141Z >>> # On worker 0: 2025-03-17T18:45:14.5280296Z >>> import torch 2025-03-17T18:45:14.5280537Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5280741Z >>> from torch import nn, Tensor 2025-03-17T18:45:14.5281157Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:14.5281310Z >>> 2025-03-17T18:45:14.5281578Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:14.5281802Z >>> remote_linear_module = RemoteModule( 2025-03-17T18:45:14.5282037Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:14.5282192Z >>> ) 2025-03-17T18:45:14.5282513Z >>> input = torch.randn(128, 20) 2025-03-17T18:45:14.5282802Z >>> ret_fut = remote_linear_module.forward_async(input) 2025-03-17T18:45:14.5282989Z >>> ret = ret_fut.wait() 2025-03-17T18:45:14.5283159Z >>> rpc.shutdown() 2025-03-17T18:45:14.5283167Z 2025-03-17T18:45:14.5283340Z >>> # On worker 1: 2025-03-17T18:45:14.5283513Z >>> import torch 2025-03-17T18:45:14.5283738Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5283900Z >>> 2025-03-17T18:45:14.5284121Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:14.5284307Z >>> rpc.shutdown() 2025-03-17T18:45:14.5284321Z 2025-03-17T18:45:14.5284814Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5284833Z 2025-03-17T18:45:14.5286169Z msg = Cannot scrape callname=_RemoteModule.init_from_module_rref in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=505. 2025-03-17T18:45:14.5286691Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5286701Z 2025-03-17T18:45:14.5287224Z Besides the constructor, a RemoteModule instance can also be initialized given a module RRef. 2025-03-17T18:45:14.5287231Z 2025-03-17T18:45:14.5287566Z This alternate initialization method can be particularly useful if we want to create multiple 2025-03-17T18:45:14.5287909Z RemoteModule instances that share the same underlying module and reduce memory consumption. 2025-03-17T18:45:14.5287913Z 2025-03-17T18:45:14.5288201Z Moreover, this also provides a workaround for passing script RemoteModule over RPC, 2025-03-17T18:45:14.5288471Z which is not supported. The recommended way is as follows: 2025-03-17T18:45:14.5288476Z 2025-03-17T18:45:14.5288602Z 1. the sender creates a RemoteModule; 2025-03-17T18:45:14.5288768Z 2. the sender sends its ``module_rref`` over RPC; 2025-03-17T18:45:14.5289120Z 3. the receiver calls this method to initialize another RemoteModule using the same ``module_rref``. 
2025-03-17T18:45:14.5289125Z 2025-03-17T18:45:14.5289243Z Example:: 2025-03-17T18:45:14.5289401Z Run the following code in two different processes: 2025-03-17T18:45:14.5289405Z 2025-03-17T18:45:14.5289537Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.5289674Z >>> # On worker 0: 2025-03-17T18:45:14.5289786Z >>> import torch 2025-03-17T18:45:14.5289916Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5290043Z >>> from torch import nn, Tensor 2025-03-17T18:45:14.5290270Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:14.5290380Z >>> 2025-03-17T18:45:14.5290529Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:14.5290660Z >>> remote_module = RemoteModule( 2025-03-17T18:45:14.5290792Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:14.5290896Z >>> ) 2025-03-17T18:45:14.5290989Z >>> 2025-03-17T18:45:14.5291117Z >>> remote_module1 = rpc.rpc_sync( 2025-03-17T18:45:14.5291222Z >>> "worker1/cpu", 2025-03-17T18:45:14.5291363Z >>> RemoteModule.init_from_module_rref, 2025-03-17T18:45:14.5291523Z >>> ("worker1/cpu", remote_module1.get_module_rref()), 2025-03-17T18:45:14.5291622Z >>> ) 2025-03-17T18:45:14.5291725Z >>> rpc.shutdown() 2025-03-17T18:45:14.5291729Z 2025-03-17T18:45:14.5291838Z >>> # On worker 1: 2025-03-17T18:45:14.5291933Z >>> import torch 2025-03-17T18:45:14.5292083Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5292211Z >>> 2025-03-17T18:45:14.5292457Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:14.5292619Z >>> rpc.shutdown() 2025-03-17T18:45:14.5292626Z 2025-03-17T18:45:14.5292776Z Args: 2025-03-17T18:45:14.5293337Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:14.5293886Z The device can be a local device or a remote device specified by one of the following remote 2025-03-17T18:45:14.5294154Z formats: 2025-03-17T18:45:14.5294164Z 2025-03-17T18:45:14.5294416Z 1. "rank:/" (ex: "rank:0/cuda:0"). 2025-03-17T18:45:14.5294690Z 2. "/" (ex: "trainer0/cuda:0"). 2025-03-17T18:45:14.5294699Z 2025-03-17T18:45:14.5295155Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:14.5295573Z module_rref (RRef[nn.Module]): The module reference shared by both the caller and 2025-03-17T18:45:14.5295710Z the created remote module. 2025-03-17T18:45:14.5295997Z _module_interface_cls (type, optional): The TorchScript interface type for the module 2025-03-17T18:45:14.5296264Z to be created. The type object should be decorated by @torch.jit.interface. 2025-03-17T18:45:14.5296493Z If not provided, the generated RemoteModule is not torchscript-able. 2025-03-17T18:45:14.5296745Z Warning, this is an experimental API and susceptible to frequent changes. 2025-03-17T18:45:14.5296755Z 2025-03-17T18:45:14.5296841Z Returns: 2025-03-17T18:45:14.5297089Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:14.5297326Z user-provided ``module_rref``, it has a blocking ``forward`` method and an 2025-03-17T18:45:14.5297620Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:14.5297772Z on the user-provided module on the remote side. 
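A condensed sketch of the three-step RRef-sharing pattern described above, assuming RPC is already initialized and the workers are named "worker0" and "worker1" as in the surrounding examples; the name ``shared_remote_module`` is illustrative only:
>>> # xdoctest: +SKIP("distributed")
>>> # Step 1 (on worker0): create the RemoteModule and obtain its module RRef.
>>> remote_module = RemoteModule("worker1/cpu", nn.Linear, args=(20, 30))
>>> module_rref = remote_module.get_module_rref()
>>> # Steps 2 and 3: send the RRef over RPC and have the receiver rebuild a
>>> # RemoteModule that shares the same underlying module (no copy is made).
>>> shared_remote_module = rpc.rpc_sync(
>>>     "worker1",
>>>     RemoteModule.init_from_module_rref,
>>>     args=("worker1/cpu", module_rref),
>>> )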
2025-03-17T18:45:14.5297777Z 2025-03-17T18:45:14.5298051Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5298102Z 2025-03-17T18:45:14.5298736Z msg = Cannot scrape callname=RemoteModule in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=597. 2025-03-17T18:45:14.5299019Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5299024Z 2025-03-17T18:45:14.5299264Z A RemoteModule instance can only be created after RPC initialization. 2025-03-17T18:45:14.5299269Z 2025-03-17T18:45:14.5299485Z It creates a user-specified module on a specified remote node. 2025-03-17T18:45:14.5299727Z It behaves like a regular ``nn.Module`` except that the ``forward`` method is 2025-03-17T18:45:14.5299893Z executed on the remote node. 2025-03-17T18:45:14.5300139Z It takes care of autograd recording to ensure the backward pass propagates 2025-03-17T18:45:14.5300321Z gradients back to the corresponding remote module. 2025-03-17T18:45:14.5300326Z 2025-03-17T18:45:14.5300549Z It generates two methods ``forward_async`` and ``forward`` based on the 2025-03-17T18:45:14.5300813Z signature of the ``forward`` method of ``module_cls``. ``forward_async`` 2025-03-17T18:45:14.5301241Z runs asynchronously and returns a Future. The arguments of ``forward_async`` 2025-03-17T18:45:14.5301611Z and ``forward`` are the same as the ``forward`` method of the module 2025-03-17T18:45:14.5301815Z returned by the ``module_cls``. 2025-03-17T18:45:14.5301824Z 2025-03-17T18:45:14.5302199Z For example, if ``module_cls`` returns an instance of ``nn.Linear``, 2025-03-17T18:45:14.5302654Z that has ``forward`` method signature: ``def forward(input: Tensor) -> Tensor:``, 2025-03-17T18:45:14.5303084Z the generated ``RemoteModule`` will have 2 methods with the signatures: 2025-03-17T18:45:14.5303093Z 2025-03-17T18:45:14.5303317Z | ``def forward(input: Tensor) -> Tensor:`` 2025-03-17T18:45:14.5303627Z | ``def forward_async(input: Tensor) -> Future[Tensor]:`` 2025-03-17T18:45:14.5303641Z 2025-03-17T18:45:14.5303789Z Args: 2025-03-17T18:45:14.5304323Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:14.5304961Z The format should be "/", where the device field can be parsed as torch.device type. 2025-03-17T18:45:14.5305197Z E.g., "trainer0/cpu", "trainer0", "ps0/cuda:0". 2025-03-17T18:45:14.5305580Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:14.5305841Z module_cls (nn.Module): Class for the module to be created remotely. For example, 2025-03-17T18:45:14.5305846Z 2025-03-17T18:45:14.5305977Z >>> class MyModule(nn.Module): 2025-03-17T18:45:14.5306089Z >>> def forward(input): 2025-03-17T18:45:14.5306210Z >>> return input + 1 2025-03-17T18:45:14.5306300Z >>> 2025-03-17T18:45:14.5306405Z >>> module_cls = MyModule 2025-03-17T18:45:14.5306421Z 2025-03-17T18:45:14.5306724Z args (Sequence, optional): args to be passed to ``module_cls``. 2025-03-17T18:45:14.5306939Z kwargs (Dict, optional): kwargs to be passed to ``module_cls``. 
2025-03-17T18:45:14.5306944Z 2025-03-17T18:45:14.5307036Z Returns: 2025-03-17T18:45:14.5307300Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:14.5307540Z user-provided ``module_cls``, it has a blocking ``forward`` method and an 2025-03-17T18:45:14.5307832Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:14.5307981Z on the user-provided module on the remote side. 2025-03-17T18:45:14.5307986Z 2025-03-17T18:45:14.5308099Z Example:: 2025-03-17T18:45:14.5308259Z Run the following code in two different processes: 2025-03-17T18:45:14.5308264Z 2025-03-17T18:45:14.5308395Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.5308491Z >>> # On worker 0: 2025-03-17T18:45:14.5308605Z >>> import torch 2025-03-17T18:45:14.5308736Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5308913Z >>> from torch import nn, Tensor 2025-03-17T18:45:14.5309141Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:14.5309233Z >>> 2025-03-17T18:45:14.5309392Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:14.5309518Z >>> remote_linear_module = RemoteModule( 2025-03-17T18:45:14.5309669Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:14.5309759Z >>> ) 2025-03-17T18:45:14.5309884Z >>> input = torch.randn(128, 20) 2025-03-17T18:45:14.5310043Z >>> ret_fut = remote_linear_module.forward_async(input) 2025-03-17T18:45:14.5310198Z >>> ret = ret_fut.wait() 2025-03-17T18:45:14.5310300Z >>> rpc.shutdown() 2025-03-17T18:45:14.5310305Z 2025-03-17T18:45:14.5310414Z >>> # On worker 1: 2025-03-17T18:45:14.5310508Z >>> import torch 2025-03-17T18:45:14.5310650Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5310739Z >>> 2025-03-17T18:45:14.5310896Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:14.5310993Z >>> rpc.shutdown() 2025-03-17T18:45:14.5310998Z 2025-03-17T18:45:14.5311220Z Furthermore, a more practical example that is combined with 2025-03-17T18:45:14.5312046Z `DistributedDataParallel `__ (DDP) 2025-03-17T18:45:14.5312671Z can be found in this `tutorial `__. 2025-03-17T18:45:14.5312681Z 2025-03-17T18:45:14.5313160Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5313173Z 2025-03-17T18:45:14.5494409Z msg = Cannot scrape callname=DistributedOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/optimizer.py line=130. 2025-03-17T18:45:14.5494949Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5494973Z 2025-03-17T18:45:14.5495270Z DistributedOptimizer takes remote references to parameters scattered 2025-03-17T18:45:14.5495532Z across workers and applies the given optimizer locally for each parameter. 2025-03-17T18:45:14.5495537Z 2025-03-17T18:45:14.5495781Z This class uses :meth:`~torch.distributed.autograd.get_gradients` in order 2025-03-17T18:45:14.5496102Z to retrieve the gradients for specific parameters. 2025-03-17T18:45:14.5496108Z 2025-03-17T18:45:14.5496215Z Concurrent calls to 2025-03-17T18:45:14.5496443Z :meth:`~torch.distributed.optim.DistributedOptimizer.step`, 2025-03-17T18:45:14.5496595Z either from the same or different clients, will 2025-03-17T18:45:14.5496848Z be serialized on each worker -- as each worker's optimizer can only work 2025-03-17T18:45:14.5497060Z on one set of gradients at a time. 
However, there is no guarantee that 2025-03-17T18:45:14.5497327Z the full forward-backward-optimizer sequence will execute for one client 2025-03-17T18:45:14.5497550Z at a time. This means that the gradients being applied may not correspond 2025-03-17T18:45:14.5497796Z to the latest forward pass executed on a given worker. Also, there is no 2025-03-17T18:45:14.5497917Z guaranteed ordering across workers. 2025-03-17T18:45:14.5497921Z 2025-03-17T18:45:14.5498199Z `DistributedOptimizer` creates the local optimizer with TorchScript enabled 2025-03-17T18:45:14.5498441Z by default, so that optimizer updates are not blocked by the Python Global 2025-03-17T18:45:14.5498708Z Interpreter Lock (GIL) in the case of multithreaded training (e.g. Distributed 2025-03-17T18:45:14.5498955Z Model Parallel). This feature is currently enabled for most optimizers. You 2025-03-17T18:45:14.5499226Z can also follow `the recipe`__ in PyTorch tutorials to enable TorchScript support 2025-03-17T18:45:14.5499341Z for your own custom optimizers. 2025-03-17T18:45:14.5499345Z 2025-03-17T18:45:14.5499446Z Args: 2025-03-17T18:45:14.5499652Z optimizer_class (optim.Optimizer): the class of optimizer to 2025-03-17T18:45:14.5499817Z instantiate on each worker. 2025-03-17T18:45:14.5500066Z params_rref (list[RRef]): list of RRefs to local or remote parameters 2025-03-17T18:45:14.5500231Z to optimize. 2025-03-17T18:45:14.5500601Z args: arguments to pass to the optimizer constructor on each worker. 2025-03-17T18:45:14.5501020Z kwargs: arguments to pass to the optimizer constructor on each worker. 2025-03-17T18:45:14.5501036Z 2025-03-17T18:45:14.5501208Z Example:: 2025-03-17T18:45:14.5501432Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.5501738Z >>> import torch.distributed.autograd as dist_autograd 2025-03-17T18:45:14.5501980Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.5502274Z >>> from torch import optim 2025-03-17T18:45:14.5502641Z >>> from torch.distributed.optim import DistributedOptimizer 2025-03-17T18:45:14.5502798Z >>> 2025-03-17T18:45:14.5503050Z >>> with dist_autograd.context() as context_id: 2025-03-17T18:45:14.5503222Z >>> # Forward pass. 2025-03-17T18:45:14.5503614Z >>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3)) 2025-03-17T18:45:14.5503989Z >>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1)) 2025-03-17T18:45:14.5504230Z >>> loss = rref1.to_here() + rref2.to_here() 2025-03-17T18:45:14.5504380Z >>> 2025-03-17T18:45:14.5504562Z >>> # Backward pass. 2025-03-17T18:45:14.5504822Z >>> dist_autograd.backward(context_id, [loss.sum()]) 2025-03-17T18:45:14.5504976Z >>> 2025-03-17T18:45:14.5505134Z >>> # Optimizer. 2025-03-17T18:45:14.5505350Z >>> dist_optim = DistributedOptimizer( 2025-03-17T18:45:14.5505510Z >>> optim.SGD, 2025-03-17T18:45:14.5505696Z >>> [rref1, rref2], 2025-03-17T18:45:14.5505854Z >>> lr=0.05, 2025-03-17T18:45:14.5506023Z >>> ) 2025-03-17T18:45:14.5506228Z >>> dist_optim.step(context_id) 2025-03-17T18:45:14.5506238Z 2025-03-17T18:45:14.5506637Z __ https://github.com/pytorch/tutorials/pull/1465 2025-03-17T18:45:14.5506651Z 2025-03-17T18:45:14.5507131Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5507139Z 2025-03-17T18:45:14.5515863Z msg = Cannot scrape callname=PostLocalSGDOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/post_localSGD_optimizer.py line=9. 
2025-03-17T18:45:14.5516509Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5516521Z 2025-03-17T18:45:14.5517264Z Wraps an arbitrary :class:`torch.optim.Optimizer` and runs `post-local SGD `_. 2025-03-17T18:45:14.5517538Z This optimizer runs the local optimizer at every step. 2025-03-17T18:45:14.5518028Z After the warm-up stage, it averages parameters periodically after the local optimizer is applied. 2025-03-17T18:45:14.5518036Z 2025-03-17T18:45:14.5518127Z Args: 2025-03-17T18:45:14.5518252Z optim: The local optimizer. 2025-03-17T18:45:14.5518482Z averager: A model averager instance to run the post-localSGD algorithm. 2025-03-17T18:45:14.5518491Z 2025-03-17T18:45:14.5518609Z Example:: 2025-03-17T18:45:14.5518613Z 2025-03-17T18:45:14.5518781Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:14.5518881Z >>> import torch 2025-03-17T18:45:14.5519004Z >>> import torch.distributed as dist 2025-03-17T18:45:14.5519301Z >>> import torch.distributed.algorithms.model_averaging.averagers as averagers 2025-03-17T18:45:14.5519412Z >>> import torch.nn as nn 2025-03-17T18:45:14.5519630Z >>> from torch.distributed.optim import PostLocalSGDOptimizer 2025-03-17T18:45:14.5519909Z >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import ( 2025-03-17T18:45:14.5520029Z >>> PostLocalSGDState, 2025-03-17T18:45:14.5520135Z >>> post_localSGD_hook, 2025-03-17T18:45:14.5520243Z >>> ) 2025-03-17T18:45:14.5520333Z >>> 2025-03-17T18:45:14.5520512Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:14.5520711Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:14.5520815Z >>> ) 2025-03-17T18:45:14.5520903Z >>> 2025-03-17T18:45:14.5521065Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:14.5521368Z >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100) 2025-03-17T18:45:14.5521551Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:14.5521640Z >>> 2025-03-17T18:45:14.5521866Z >>> # Create a post-localSGD optimizer that wraps a local optimizer. 2025-03-17T18:45:14.5522121Z >>> # Note that ``warmup_steps`` used in ``PostLocalSGDOptimizer`` must be the same as 2025-03-17T18:45:14.5522342Z >>> # ``start_localSGD_iter`` used in ``PostLocalSGDState``. 2025-03-17T18:45:14.5522559Z >>> local_optim = torch.optim.SGD(params=model.parameters(), lr=0.01) 2025-03-17T18:45:14.5522691Z >>> opt = PostLocalSGDOptimizer( 2025-03-17T18:45:14.5522816Z >>> optim=local_optim, 2025-03-17T18:45:14.5523254Z >>> averager=averagers.PeriodicModelAverager(period=4, warmup_steps=100) 2025-03-17T18:45:14.5523397Z >>> ) 2025-03-17T18:45:14.5523559Z >>> 2025-03-17T18:45:14.5523980Z >>> # In the first 100 steps, DDP runs global gradient averaging at every step. 2025-03-17T18:45:14.5524545Z >>> # After 100 steps, DDP runs gradient averaging within each subgroup (intra-node by default), 2025-03-17T18:45:14.5525270Z >>> # and the post-localSGD optimizer runs global model averaging every 4 steps after applying the local optimizer.
2025-03-17T18:45:14.5525465Z >>> for step in range(0, 200): 2025-03-17T18:45:14.5525656Z >>> opt.zero_grad() 2025-03-17T18:45:14.5525864Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:14.5526056Z >>> loss.backward() 2025-03-17T18:45:14.5526205Z >>> opt.step() 2025-03-17T18:45:14.5526214Z 2025-03-17T18:45:14.5526717Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5526729Z 2025-03-17T18:45:14.5628611Z msg = Cannot scrape callname=ZeroRedundancyOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/zero_redundancy_optimizer.py line=284. 2025-03-17T18:45:14.5629148Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5629158Z 2025-03-17T18:45:14.5630147Z Wraps an arbitrary :class:`optim.Optimizer ` and shards its states across ranks in the group. 2025-03-17T18:45:14.5630176Z 2025-03-17T18:45:14.5630421Z The sharding is done as described by ZeRO_. 2025-03-17T18:45:14.5630430Z 2025-03-17T18:45:14.5630710Z The local optimizer instance in each rank is only 2025-03-17T18:45:14.5631184Z responsible for updating approximately ``1 / world_size`` parameters and 2025-03-17T18:45:14.5631580Z hence only needs to keep ``1 / world_size`` optimizer states. After 2025-03-17T18:45:14.5632048Z parameters are updated locally, each rank will broadcast its parameters to 2025-03-17T18:45:14.5632410Z all other peers to keep all model replicas in the same state. 2025-03-17T18:45:14.5632779Z ``ZeroRedundancyOptimizer`` can be used in conjunction with 2025-03-17T18:45:14.5633285Z :class:`torch.nn.parallel.DistributedDataParallel` to reduce per-rank peak 2025-03-17T18:45:14.5633466Z memory consumption. 2025-03-17T18:45:14.5633475Z 2025-03-17T18:45:14.5634032Z ``ZeroRedundancyOptimizer`` uses a sorted-greedy algorithm to pack a number 2025-03-17T18:45:14.5634477Z of parameters at each rank. Each parameter belongs to a single rank and is 2025-03-17T18:45:14.5634951Z not divided among ranks. The partition is arbitrary and might not match the 2025-03-17T18:45:14.5635199Z parameter registration or usage order. 2025-03-17T18:45:14.5635207Z 2025-03-17T18:45:14.5635379Z Arguments: 2025-03-17T18:45:14.5635742Z params (``Iterable``): an ``Iterable`` of :class:`torch.Tensor` s 2025-03-17T18:45:14.5636107Z or :class:`dict` s giving all parameters, which will be sharded 2025-03-17T18:45:14.5636357Z across ranks. 2025-03-17T18:45:14.5636365Z 2025-03-17T18:45:14.5636543Z Keyword Args: 2025-03-17T18:45:14.5637194Z optimizer_class (:class:`torch.optim.Optimizer`): the class of the local 2025-03-17T18:45:14.5637371Z optimizer. 2025-03-17T18:45:14.5637770Z process_group (``ProcessGroup``, optional): ``torch.distributed`` 2025-03-17T18:45:14.5638172Z ``ProcessGroup`` (default: ``dist.group.WORLD`` initialized by 2025-03-17T18:45:14.5638445Z :meth:`torch.distributed.init_process_group`). 2025-03-17T18:45:14.5638892Z parameters_as_bucket_view (bool, optional): if ``True``, parameters are 2025-03-17T18:45:14.5639294Z packed into buckets to speed up communication, and ``param.data`` 2025-03-17T18:45:14.5639788Z fields point to bucket views at different offsets; if ``False``, 2025-03-17T18:45:14.5640171Z each individual parameter is communicated separately, and each 2025-03-17T18:45:14.5640464Z ``params.data`` stays intact (default: ``False``).
2025-03-17T18:45:14.5640830Z overlap_with_ddp (bool, optional): if ``True``, :meth:`step` is 2025-03-17T18:45:14.5641214Z overlapped with :class:`DistributedDataParallel` 's gradient 2025-03-17T18:45:14.5641619Z synchronization; this requires (1) either a functional optimizer 2025-03-17T18:45:14.5641984Z for the ``optimizer_class`` argument or one with a functional 2025-03-17T18:45:14.5642319Z equivalent and (2) registering a DDP communication hook 2025-03-17T18:45:14.5642713Z constructed from one of the functions in ``ddp_zero_hook.py``; 2025-03-17T18:45:14.5643029Z parameters are packed into buckets matching those in 2025-03-17T18:45:14.5643337Z :class:`DistributedDataParallel`, meaning that the 2025-03-17T18:45:14.5643622Z ``parameters_as_bucket_view`` argument is ignored. 2025-03-17T18:45:14.5643980Z If ``False``, :meth:`step` runs disjointly after the backward pass 2025-03-17T18:45:14.5644150Z (per normal). 2025-03-17T18:45:14.5644348Z (default: ``False``) 2025-03-17T18:45:14.5644756Z **defaults: any trailing arguments, which are forwarded to the local 2025-03-17T18:45:14.5644941Z optimizer. 2025-03-17T18:45:14.5644950Z 2025-03-17T18:45:14.5645120Z Example:: 2025-03-17T18:45:14.5645129Z 2025-03-17T18:45:14.5645321Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.5645665Z >>> import torch.nn as nn 2025-03-17T18:45:14.5646077Z >>> from torch.distributed.optim import ZeroRedundancyOptimizer 2025-03-17T18:45:14.5646468Z >>> from torch.nn.parallel import DistributedDataParallel as DDP 2025-03-17T18:45:14.5646910Z >>> model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)]) 2025-03-17T18:45:14.5647129Z >>> ddp = DDP(model, device_ids=[rank]) 2025-03-17T18:45:14.5647365Z >>> opt = ZeroRedundancyOptimizer( 2025-03-17T18:45:14.5647557Z >>> ddp.parameters(), 2025-03-17T18:45:14.5647795Z >>> optimizer_class=torch.optim.Adam, 2025-03-17T18:45:14.5647957Z >>> lr=0.01 2025-03-17T18:45:14.5648125Z >>> ) 2025-03-17T18:45:14.5648334Z >>> ddp(inputs).sum().backward() 2025-03-17T18:45:14.5648521Z >>> opt.step() 2025-03-17T18:45:14.5648529Z 2025-03-17T18:45:14.5648704Z .. warning:: 2025-03-17T18:45:14.5649113Z Currently, ``ZeroRedundancyOptimizer`` requires that all of the 2025-03-17T18:45:14.5649390Z passed-in parameters are the same dense type. 2025-03-17T18:45:14.5649399Z 2025-03-17T18:45:14.5649576Z .. warning:: 2025-03-17T18:45:14.5649975Z If you pass ``overlap_with_ddp=True``, be wary of the following: Given 2025-03-17T18:45:14.5650376Z the way that overlapping :class:`DistributedDataParallel` with 2025-03-17T18:45:14.5650826Z :class:`ZeroRedundancyOptimizer` is currently implemented, the first 2025-03-17T18:45:14.5651246Z two or three training iterations do not perform parameter updates in 2025-03-17T18:45:14.5651611Z the optimizer step, depending on if ``static_graph=False`` or 2025-03-17T18:45:14.5652057Z ``static_graph=True``, respectively. This is because it needs 2025-03-17T18:45:14.5652416Z information about the gradient bucketing strategy used by 2025-03-17T18:45:14.5652853Z :class:`DistributedDataParallel`, which is not finalized until the 2025-03-17T18:45:14.5653242Z second forward pass if ``static_graph=False`` or until the third 2025-03-17T18:45:14.5653662Z forward pass if ``static_graph=True``. To adjust for this, one option 2025-03-17T18:45:14.5653871Z is to prepend dummy inputs. 2025-03-17T18:45:14.5653879Z 2025-03-17T18:45:14.5654389Z .. warning:: ZeroRedundancyOptimizer is experimental and subject to change. 2025-03-17T18:45:14.5654453Z 2025-03-17T18:45:14.5654690Z .. 
_ZeRO: https://arxiv.org/abs/1910.02054 2025-03-17T18:45:14.5654698Z 2025-03-17T18:45:14.5654705Z 2025-03-17T18:45:14.5655207Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5655215Z 2025-03-17T18:45:14.5854224Z msg = Cannot scrape callname=_CustomReducer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py line=28. 2025-03-17T18:45:14.5854796Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.5854806Z 2025-03-17T18:45:14.5855260Z Custom reducer class that can be used to specify a custom operation that 2025-03-17T18:45:14.5855585Z reduces losses of multiple microbatches into one value. 2025-03-17T18:45:14.5855593Z 2025-03-17T18:45:14.5855758Z Example: 2025-03-17T18:45:14.5855927Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.5856133Z >>> sum_reducer = _CustomReducer( 2025-03-17T18:45:14.5856309Z >>> torch.tensor(0.0), 2025-03-17T18:45:14.5856475Z >>> lambda a, b: a + b 2025-03-17T18:45:14.5856629Z >>> ) 2025-03-17T18:45:14.5856638Z 2025-03-17T18:45:14.5857128Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.5857148Z 2025-03-17T18:45:14.6342165Z msg = Cannot scrape callname=async_execution in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/functions.py line=6. 2025-03-17T18:45:14.6342781Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.6342788Z 2025-03-17T18:45:14.6343053Z A decorator for a function indicating that the return value of the function 2025-03-17T18:45:14.6343473Z is guaranteed to be a :class:`~torch.futures.Future` object and this 2025-03-17T18:45:14.6343738Z function can run asynchronously on the RPC callee. More specifically, the 2025-03-17T18:45:14.6343978Z callee extracts the :class:`~torch.futures.Future` returned by the wrapped 2025-03-17T18:45:14.6344234Z function and installs subsequent processing steps as a callback to that 2025-03-17T18:45:14.6344477Z :class:`~torch.futures.Future`. The installed callback will read the value 2025-03-17T18:45:14.6344699Z from the :class:`~torch.futures.Future` when completed and send the 2025-03-17T18:45:14.6344886Z value back as the RPC response. That also means the returned 2025-03-17T18:45:14.6345138Z :class:`~torch.futures.Future` only exists on the callee side and is never 2025-03-17T18:45:14.6345370Z sent through RPC. This decorator is useful when the wrapped function's 2025-03-17T18:45:14.6345589Z (``fn``) execution needs to pause and resume due to, e.g., containing 2025-03-17T18:45:14.6345825Z :meth:`~torch.distributed.rpc.rpc_async` or waiting for other signals. 2025-03-17T18:45:14.6345830Z 2025-03-17T18:45:14.6346079Z .. note:: To enable asynchronous execution, applications must pass the 2025-03-17T18:45:14.6346319Z function object returned by this decorator to RPC APIs. If RPC detected 2025-03-17T18:45:14.6346714Z attributes installed by this decorator, it knows that this function 2025-03-17T18:45:14.6347032Z returns a ``Future`` object and will handle that accordingly. 2025-03-17T18:45:14.6347436Z However, this does not mean this decorator has to be outmost one when 2025-03-17T18:45:14.6347837Z defining a function. 
For example, when combined with ``@staticmethod`` 2025-03-17T18:45:14.6348332Z or ``@classmethod``, ``@rpc.functions.async_execution`` needs to be the 2025-03-17T18:45:14.6348741Z inner decorator to allow the target function be recognized as a static 2025-03-17T18:45:14.6349187Z or class function. This target function can still execute asynchronously 2025-03-17T18:45:14.6349630Z because, when accessed, the static or class method preserves attributes 2025-03-17T18:45:14.6349924Z installed by ``@rpc.functions.async_execution``. 2025-03-17T18:45:14.6349933Z 2025-03-17T18:45:14.6349941Z 2025-03-17T18:45:14.6350111Z Example:: 2025-03-17T18:45:14.6350505Z The returned :class:`~torch.futures.Future` object can come from 2025-03-17T18:45:14.6350865Z :meth:`~torch.distributed.rpc.rpc_async`, 2025-03-17T18:45:14.6351305Z :meth:`~torch.futures.Future.then`, or :class:`~torch.futures.Future` 2025-03-17T18:45:14.6351616Z constructor. The example below shows directly using the 2025-03-17T18:45:14.6351877Z :class:`~torch.futures.Future` returned by 2025-03-17T18:45:14.6352099Z :meth:`~torch.futures.Future.then`. 2025-03-17T18:45:14.6352107Z 2025-03-17T18:45:14.6352340Z >>> from torch.distributed import rpc 2025-03-17T18:45:14.6352488Z >>> 2025-03-17T18:45:14.6352713Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:14.6352867Z >>> 2025-03-17T18:45:14.6353067Z >>> # On all workers 2025-03-17T18:45:14.6353281Z >>> @rpc.functions.async_execution 2025-03-17T18:45:14.6353510Z >>> def async_add_chained(to, x, y, z): 2025-03-17T18:45:14.6353876Z >>> # This function runs on "worker1" and returns immediately when 2025-03-17T18:45:14.6354252Z >>> # the callback is installed through the `then(cb)` API. In the 2025-03-17T18:45:14.6354590Z >>> # mean time, the `rpc_async` to "worker2" can run concurrently. 2025-03-17T18:45:14.6354902Z >>> # When the return value of that `rpc_async` arrives at 2025-03-17T18:45:14.6355252Z >>> # "worker1", "worker1" will run the lambda function accordingly 2025-03-17T18:45:14.6355611Z >>> # and set the value for the previously returned `Future`, which 2025-03-17T18:45:14.6355943Z >>> # will then trigger RPC to send the result back to "worker0". 2025-03-17T18:45:14.6356269Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:14.6356598Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:14.6356776Z >>> ) 2025-03-17T18:45:14.6356927Z >>> 2025-03-17T18:45:14.6357117Z >>> # On worker0 2025-03-17T18:45:14.6357293Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.6357488Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:14.6357654Z >>> "worker1", 2025-03-17T18:45:14.6357851Z >>> async_add_chained, 2025-03-17T18:45:14.6358072Z >>> args=("worker2", torch.ones(2), 1, 1) 2025-03-17T18:45:14.6358243Z >>> ) 2025-03-17T18:45:14.6358464Z >>> print(ret) # prints tensor([3., 3.]) 2025-03-17T18:45:14.6358474Z 2025-03-17T18:45:14.6358943Z When combined with TorchScript decorators, this decorator must be the 2025-03-17T18:45:14.6359111Z outmost one. 
2025-03-17T18:45:14.6359119Z 2025-03-17T18:45:14.6359329Z >>> from torch import Tensor 2025-03-17T18:45:14.6359546Z >>> from torch.futures import Future 2025-03-17T18:45:14.6359785Z >>> from torch.distributed import rpc 2025-03-17T18:45:14.6359944Z >>> 2025-03-17T18:45:14.6360184Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:14.6360338Z >>> 2025-03-17T18:45:14.6360527Z >>> # On all workers 2025-03-17T18:45:14.6360709Z >>> @torch.jit.script 2025-03-17T18:45:14.6360977Z >>> def script_add(x: Tensor, y: Tensor) -> Tensor: 2025-03-17T18:45:14.6361166Z >>> return x + y 2025-03-17T18:45:14.6361325Z >>> 2025-03-17T18:45:14.6361553Z >>> @rpc.functions.async_execution 2025-03-17T18:45:14.6361730Z >>> @torch.jit.script 2025-03-17T18:45:14.6362074Z >>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]: 2025-03-17T18:45:14.6362410Z >>> return rpc.rpc_async(to, script_add, (x, y)) 2025-03-17T18:45:14.6362564Z >>> 2025-03-17T18:45:14.6362726Z >>> # On worker0 2025-03-17T18:45:14.6362923Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:14.6363091Z >>> "worker1", 2025-03-17T18:45:14.6363268Z >>> async_add, 2025-03-17T18:45:14.6363485Z >>> args=("worker2", torch.ones(2), 1) 2025-03-17T18:45:14.6363661Z >>> ) 2025-03-17T18:45:14.6363856Z >>> print(ret) # prints tensor([2., 2.]) 2025-03-17T18:45:14.6363865Z 2025-03-17T18:45:14.6364278Z When combined with static or class method, this decorator must be the 2025-03-17T18:45:14.6364427Z inner one. 2025-03-17T18:45:14.6364499Z 2025-03-17T18:45:14.6364727Z >>> from torch.distributed import rpc 2025-03-17T18:45:14.6364874Z >>> 2025-03-17T18:45:14.6365093Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:14.6365234Z >>> 2025-03-17T18:45:14.6365412Z >>> # On all workers 2025-03-17T18:45:14.6365624Z >>> class AsyncExecutionClass: 2025-03-17T18:45:14.6365788Z >>> 2025-03-17T18:45:14.6365959Z >>> @staticmethod 2025-03-17T18:45:14.6366186Z >>> @rpc.functions.async_execution 2025-03-17T18:45:14.6366394Z >>> def static_async_add(to, x, y, z): 2025-03-17T18:45:14.6366730Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:14.6366944Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:14.6367108Z >>> ) 2025-03-17T18:45:14.6367251Z >>> 2025-03-17T18:45:14.6367428Z >>> @classmethod 2025-03-17T18:45:14.6367644Z >>> @rpc.functions.async_execution 2025-03-17T18:45:14.6367869Z >>> def class_async_add(cls, to, x, y, z): 2025-03-17T18:45:14.6368111Z >>> ret_fut = torch.futures.Future() 2025-03-17T18:45:14.6368384Z >>> rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:14.6368673Z >>> lambda fut: ret_fut.set_result(fut.wait() + z) 2025-03-17T18:45:14.6368834Z >>> ) 2025-03-17T18:45:14.6369026Z >>> return ret_fut 2025-03-17T18:45:14.6369176Z >>> 2025-03-17T18:45:14.6369405Z >>> @rpc.functions.async_execution 2025-03-17T18:45:14.6369633Z >>> def bound_async_add(self, to, x, y, z): 2025-03-17T18:45:14.6369961Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:14.6370309Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:14.6370483Z >>> ) 2025-03-17T18:45:14.6370640Z >>> 2025-03-17T18:45:14.6370816Z >>> # On worker0 2025-03-17T18:45:14.6370993Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:14.6371162Z >>> "worker1", 2025-03-17T18:45:14.6371409Z >>> AsyncExecutionClass.static_async_add, 2025-03-17T18:45:14.6371631Z >>> args=("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:14.6371777Z >>> ) 2025-03-17T18:45:14.6372002Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:14.6372145Z >>> 
2025-03-17T18:45:14.6372345Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:14.6372504Z >>> "worker1", 2025-03-17T18:45:14.6372748Z >>> AsyncExecutionClass.class_async_add, 2025-03-17T18:45:14.6372960Z >>> args=("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:14.6373131Z >>> ) 2025-03-17T18:45:14.6373340Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:14.6373350Z 2025-03-17T18:45:14.6373677Z This decorator also works with RRef helpers, i.e., . 2025-03-17T18:45:14.6373938Z :meth:`torch.distributed.rpc.RRef.rpc_sync`, 2025-03-17T18:45:14.6374233Z :meth:`torch.distributed.rpc.RRef.rpc_async`, and 2025-03-17T18:45:14.6374488Z :meth:`torch.distributed.rpc.RRef.remote`. 2025-03-17T18:45:14.6374502Z 2025-03-17T18:45:14.6374733Z >>> from torch.distributed import rpc 2025-03-17T18:45:14.6374888Z >>> 2025-03-17T18:45:14.6375146Z >>> # reuse the AsyncExecutionClass class above 2025-03-17T18:45:14.6375426Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:14.6375910Z >>> ret = rref.rpc_sync().static_async_add("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:14.6376130Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:14.6376296Z >>> 2025-03-17T18:45:14.6376572Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:14.6377035Z >>> ret = rref.rpc_async().static_async_add("worker2", torch.ones(2), 1, 2).wait() 2025-03-17T18:45:14.6377255Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:14.6377407Z >>> 2025-03-17T18:45:14.6377700Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:14.6378142Z >>> ret = rref.remote().static_async_add("worker2", torch.ones(2), 1, 2).to_here() 2025-03-17T18:45:14.6378432Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:14.6378442Z 2025-03-17T18:45:14.6378947Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.6378956Z 2025-03-17T18:45:14.6405178Z msg = Cannot scrape callname=TensorPipeRpcBackendOptions.set_device_map in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/options.py line=108. 2025-03-17T18:45:14.6405524Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.6405530Z 2025-03-17T18:45:14.6405745Z Set device mapping between each RPC caller and callee pair. This 2025-03-17T18:45:14.6405959Z function can be called multiple times to incrementally add 2025-03-17T18:45:14.6406084Z device placement configurations. 2025-03-17T18:45:14.6406088Z 2025-03-17T18:45:14.6406189Z Args: 2025-03-17T18:45:14.6406295Z to (str): Callee name. 2025-03-17T18:45:14.6406516Z device_map (Dict of int, str, or torch.device): Device placement 2025-03-17T18:45:14.6406705Z mappings from this worker to the callee. This map must be 2025-03-17T18:45:14.6406811Z invertible. 
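For intuition, "invertible" here means the device map must be one-to-one: no two caller devices may map to the same callee device, since return values are routed back through the inverse of the map. A minimal sketch of that check in plain Python (``is_invertible`` is a hypothetical helper name, not part of the RPC API):

    def is_invertible(device_map):
        # One-to-one iff no callee device is reused by two caller devices.
        return len(set(device_map.values())) == len(device_map)

    assert is_invertible({0: 1, 1: 2})       # caller cuda:0 -> callee cuda:1, cuda:1 -> cuda:2
    assert not is_invertible({0: 1, 1: 1})   # both caller devices target callee cuda:1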
2025-03-17T18:45:14.6406816Z 2025-03-17T18:45:14.6406907Z Example: 2025-03-17T18:45:14.6407038Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.6407135Z >>> # both workers 2025-03-17T18:45:14.6407244Z >>> def add(x, y): 2025-03-17T18:45:14.6407385Z >>> print(x) # tensor([1., 1.], device='cuda:1') 2025-03-17T18:45:14.6407509Z >>> return x + y, (x + y).to(2) 2025-03-17T18:45:14.6407598Z >>> 2025-03-17T18:45:14.6407705Z >>> # on worker 0 2025-03-17T18:45:14.6407967Z >>> options = TensorPipeRpcBackendOptions( 2025-03-17T18:45:14.6408100Z >>> num_worker_threads=8, 2025-03-17T18:45:14.6408215Z >>> device_maps={"worker1": {0: 1}} 2025-03-17T18:45:14.6408362Z >>> # maps worker0's cuda:0 to worker1's cuda:1 2025-03-17T18:45:14.6408453Z >>> ) 2025-03-17T18:45:14.6408599Z >>> options.set_device_map("worker1", {1: 2}) 2025-03-17T18:45:14.6408731Z >>> # maps worker0's cuda:1 to worker1's cuda:2 2025-03-17T18:45:14.6408833Z >>> 2025-03-17T18:45:14.6408932Z >>> rpc.init_rpc( 2025-03-17T18:45:14.6409038Z >>> "worker0", 2025-03-17T18:45:14.6409132Z >>> rank=0, 2025-03-17T18:45:14.6409230Z >>> world_size=2, 2025-03-17T18:45:14.6409375Z >>> backend=rpc.BackendType.TENSORPIPE, 2025-03-17T18:45:14.6409490Z >>> rpc_backend_options=options 2025-03-17T18:45:14.6409591Z >>> ) 2025-03-17T18:45:14.6409678Z >>> 2025-03-17T18:45:14.6409791Z >>> x = torch.ones(2) 2025-03-17T18:45:14.6409968Z >>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1)) 2025-03-17T18:45:14.6410282Z >>> # The first argument will be moved to cuda:1 on worker1. When 2025-03-17T18:45:14.6410603Z >>> # sending the return value back, it will follow the invert of 2025-03-17T18:45:14.6410930Z >>> # the device map, and hence will be moved back to cuda:0 and 2025-03-17T18:45:14.6411104Z >>> # cuda:1 on worker0 2025-03-17T18:45:14.6411386Z >>> print(rets[0]) # tensor([2., 2.], device='cuda:0') 2025-03-17T18:45:14.6411638Z >>> print(rets[1]) # tensor([2., 2.], device='cuda:1') 2025-03-17T18:45:14.6411647Z 2025-03-17T18:45:14.6412203Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.6412212Z 2025-03-17T18:45:14.6438683Z msg = Cannot scrape callname=_server_process_global_profile in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/server_process_global_profiler.py line=19. 2025-03-17T18:45:14.6439232Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.6439243Z 2025-03-17T18:45:14.6439642Z It has the same API as ``torch.autograd.profiler.profile`` class, 2025-03-17T18:45:14.6440153Z except that it enables profiling on all threads running RPC server request callbacks. 2025-03-17T18:45:14.6440294Z 2025-03-17T18:45:14.6440853Z Context manager that manages autograd profiler state and holds a summary of results. 2025-03-17T18:45:14.6441266Z Under the hood it just records events of functions being executed in C++ and 2025-03-17T18:45:14.6441559Z exposes those events to Python. You can wrap any code into it and it will 2025-03-17T18:45:14.6441727Z only report runtime of PyTorch functions. 2025-03-17T18:45:14.6442017Z Note: profiler is thread local and is automatically propagated into the async tasks 2025-03-17T18:45:14.6442022Z 2025-03-17T18:45:14.6442113Z Args: 2025-03-17T18:45:14.6442403Z enabled (bool, optional): Setting this to False makes this context manager a no-op. 2025-03-17T18:45:14.6442514Z Default: ``True``. 
2025-03-17T18:45:14.6442531Z 2025-03-17T18:45:14.6442822Z use_cuda (bool, optional): Enables timing of CUDA events as well using the cudaEvent API. 2025-03-17T18:45:14.6443039Z Adds approximately 4us of overhead to each tensor operation. 2025-03-17T18:45:14.6443144Z Default: ``False`` 2025-03-17T18:45:14.6443148Z 2025-03-17T18:45:14.6443392Z record_shapes (bool, optional): If shapes recording is set, information 2025-03-17T18:45:14.6443627Z about input dimensions will be collected. This allows one to see which 2025-03-17T18:45:14.6443859Z dimensions have been used under the hood and further group by them 2025-03-17T18:45:14.6444083Z using prof.key_averages(group_by_input_shape=True). Please note that 2025-03-17T18:45:14.6444331Z shape recording might skew your profiling data. It is recommended to 2025-03-17T18:45:14.6444574Z use separate runs with and without shape recording to validate the timing. 2025-03-17T18:45:14.6444936Z Most likely the skew will be negligible for bottom most events (in a case 2025-03-17T18:45:14.6445161Z of nested function calls). But for higher level functions the total 2025-03-17T18:45:14.6445387Z self cpu time might be artificially increased because of the shape 2025-03-17T18:45:14.6445506Z collection. 2025-03-17T18:45:14.6445510Z 2025-03-17T18:45:14.6445803Z profile_memory (bool, optional): Whether to report memory usage, default: ``False`` 2025-03-17T18:45:14.6445807Z 2025-03-17T18:45:14.6445902Z .. warning: 2025-03-17T18:45:14.6446128Z Enabling memory profiling incurs additional profiler overhead 2025-03-17T18:45:14.6446135Z 2025-03-17T18:45:14.6446229Z .. warning: 2025-03-17T18:45:14.6446500Z Due to some CUDA multiprocessing limitations (multiprocessing-cuda-note_), 2025-03-17T18:45:14.6446703Z one cannot use the profiler with ``use_cuda = True`` to benchmark 2025-03-17T18:45:14.6446972Z DataLoaders with ``num_workers > 0``. If you wish to benchmark data loading, 2025-03-17T18:45:14.6447232Z please use ``use_cuda = False`` or ``num_workers = 0``. 2025-03-17T18:45:14.6447240Z 2025-03-17T18:45:14.6447391Z Example: 2025-03-17T18:45:14.6447555Z >>> # xdoctest: +SKIP 2025-03-17T18:45:14.6447729Z >>> # On worker 0: 2025-03-17T18:45:14.6447896Z >>> import torch 2025-03-17T18:45:14.6448124Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.6448374Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:14.6448595Z >>> x, y = torch.tensor(1), torch.tensor(2) 2025-03-17T18:45:14.6448888Z >>> outer_profile_rref = rpc.remote( 2025-03-17T18:45:14.6449200Z ... dst_worker_name, rpc._server_process_global_profile 2025-03-17T18:45:14.6449355Z ... ) 2025-03-17T18:45:14.6449579Z >>> outer_profile_rref.rpc_sync().__enter__() 2025-03-17T18:45:14.6449842Z >>> rpc.rpc_sync(dst_worker_name, torch.add, (x, y)) 2025-03-17T18:45:14.6450066Z >>> inner_profile_rref = rpc.remote( 2025-03-17T18:45:14.6450360Z ... dst_worker_name, rpc._server_process_global_profile 2025-03-17T18:45:14.6450520Z ... 
) 2025-03-17T18:45:14.6450743Z >>> inner_profile_rref.rpc_sync().__enter__() 2025-03-17T18:45:14.6450979Z >>> rpc.rpc_sync(dst_worker_name, torch.sub, (x, y)) 2025-03-17T18:45:14.6451234Z >>> inner_profile_rref.rpc_sync().__exit__(None, None, None) 2025-03-17T18:45:14.6451412Z >>> outer_profile_rref.rpc_sync().__exit__(None, None, None) 2025-03-17T18:45:14.6451593Z >>> print(inner_profile_rref.rpc_sync().key_averages()) 2025-03-17T18:45:14.6451840Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6452173Z Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls 2025-03-17T18:45:14.6452418Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6452616Z sub 85.06% 76.275us 100.00% 89.667us 89.667us 1 2025-03-17T18:45:14.6452821Z empty 14.94% 13.392us 14.94% 13.392us 13.392us 1 2025-03-17T18:45:14.6453048Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6453179Z Self CPU time total: 89.667us 2025-03-17T18:45:14.6453348Z >>> print(outer_profile_rref.rpc_sync().key_averages()) 2025-03-17T18:45:14.6453586Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6453897Z Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls 2025-03-17T18:45:14.6454134Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6454370Z sub 35.65% 76.275us 41.91% 89.667us 89.667us 1 2025-03-17T18:45:14.6454574Z empty 12.67% 27.101us 12.67% 27.101us 13.551us 2 2025-03-17T18:45:14.6454757Z add 51.68% 110.550us 58.09% 124.259us 124.259us 1 2025-03-17T18:45:14.6454998Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:14.6455111Z Self CPU time total: 213.926us 2025-03-17T18:45:14.6455231Z >>> rpc.shutdown() 2025-03-17T18:45:14.6455239Z 2025-03-17T18:45:14.6455335Z >>> # On worker 1: 2025-03-17T18:45:14.6455479Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:14.6455624Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:14.6455800Z >>> # wait for worker 0 to finish work, and then shutdown. 2025-03-17T18:45:14.6455900Z >>> rpc.shutdown() 2025-03-17T18:45:14.6455905Z 2025-03-17T18:45:14.6456185Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.6456190Z 2025-03-17T18:45:14.7873494Z msg = Cannot scrape callname=local_map in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/experimental/_func_map.py line=33. 2025-03-17T18:45:14.7874584Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.7874988Z 2025-03-17T18:45:14.7875264Z :meth:`local_map` is an experimental API that allows users to pass :class:`DTensor` s 2025-03-17T18:45:14.7875952Z to a function that is written to be applied on ``torch.Tensor`` s. It is done by extracting 2025-03-17T18:45:14.7876812Z the local components of :class:`DTensor`, call the function, and wrap the outputs to 2025-03-17T18:45:14.7877382Z :class:`DTensor` according to the ``out_placements``. 
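As a rough sketch of that flow, not the actual implementation: the snippet below handles a single :class:`DTensor` argument, assumes the public ``torch.distributed.tensor`` package and an already-initialized device mesh, and uses a hypothetical helper name ``apply_on_local_shard``.

    from torch.distributed.tensor import DTensor

    def apply_on_local_shard(func, dtensor, out_placements):
        # Unwrap: take this rank's local shard out of the DTensor.
        local = dtensor.to_local()
        # Apply: run the plain torch.Tensor function on the local shard.
        local_out = func(local)
        # Rewrap: build a DTensor from the local result using out_placements.
        return DTensor.from_local(local_out, dtensor.device_mesh, out_placements)

:meth:`local_map` itself additionally handles multiple arguments, placement checks, and optional redistribution, as described below.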
2025-03-17T18:45:14.7877675Z 2025-03-17T18:45:14.7877767Z Args: 2025-03-17T18:45:14.7878127Z func (Callable): the function to be applied on each local shard of 2025-03-17T18:45:14.7878576Z :class:`DTensor` s. 2025-03-17T18:45:14.7879008Z out_placements (Union[`PlacementType`, Tuple[`PlacementType`, ...]]): 2025-03-17T18:45:14.7879644Z the desired placements of the :class:`DTensor` s in ``func``'s flattened output. 2025-03-17T18:45:14.7880348Z If the flattened ``output`` is a single value, the ``out_placements`` should be 2025-03-17T18:45:14.7880980Z of type `PlacementType`. Otherwise if the flattened ``output`` has multiple 2025-03-17T18:45:14.7881619Z values, the ``out_placements`` should be a tuple of `PlacementType` values 1:1 2025-03-17T18:45:14.7882140Z mapping to the flattened ``output``. 2025-03-17T18:45:14.7882601Z Besides, for :class:`Tensor` output, we use `PlacementType` as its 2025-03-17T18:45:14.7883230Z placements (a `Tuple[Placement]` value). For non-Tensor output, the `PlacementType` 2025-03-17T18:45:14.7883752Z should be `None`. 2025-03-17T18:45:14.7884192Z Note that the only exception is when no :class:`DTensor` argument is passed 2025-03-17T18:45:14.7884798Z in. In this case, even if `out_placements` is not `None`, the result function 2025-03-17T18:45:14.7885421Z should ignore the desired placements because the function is not running with 2025-03-17T18:45:14.7885924Z :class:`DTensor` s. 2025-03-17T18:45:14.7886286Z in_placements (Tuple[`PlacementType`, ...], optional): 2025-03-17T18:45:14.7886867Z the required placements of the :class:`DTensor` s in the flattened inputs of ``func``. 2025-03-17T18:45:14.7887523Z If ``in_placements`` is specified, :meth:`local_map` would examine whether the 2025-03-17T18:45:14.7888127Z placements of each :class:`DTensor` argument is the same as the required 2025-03-17T18:45:14.7888679Z placements or not. If the placements are not the same and 2025-03-17T18:45:14.7889327Z ``redistribute_inputs`` is ``False``, an exception will be raised. Otherwise if 2025-03-17T18:45:14.7889968Z ``redistribute_inputs`` is ``True``, the argument will be first redistributed to 2025-03-17T18:45:14.7890621Z the required sharding placements before passing its local tensor to ``func``. 2025-03-17T18:45:14.7891415Z The only exception is when required placements are not ``None`` and the 2025-03-17T18:45:14.7892167Z argument is a :class:`torch.Tensor`. In this case, the placements examination 2025-03-17T18:45:14.7892985Z will be skipped and the argument will be directly passed to ``func``. 2025-03-17T18:45:14.7894009Z If ``in_placements`` is ``None``, no placements examination will be performed. 2025-03-17T18:45:14.7894488Z Default: None 2025-03-17T18:45:14.7894798Z device_mesh (:class:`DeviceMesh`, optional): 2025-03-17T18:45:14.7895286Z the device mesh that all the :class:`DTensor` s are placed on. If not 2025-03-17T18:45:14.7895883Z specified, this will be inferred from the input :class:`DTensor` s' device 2025-03-17T18:45:14.7896498Z mesh. `local_map` requires every :class:`DTensor` s to be placed on the same 2025-03-17T18:45:14.7896988Z device mesh. Default: None. 2025-03-17T18:45:14.7897334Z redistribute_inputs (bool, optional): 2025-03-17T18:45:14.7897839Z the bool value indicating whether to reshard the input :class:`DTensor` s when 2025-03-17T18:45:14.7898486Z their placements are different from the required input placements. 
If this 2025-03-17T18:45:14.7899105Z value is ``False`` and some :class:`DTensor` input has a different placement, 2025-03-17T18:45:14.7899684Z an exception will be raised. Default: False. 2025-03-17T18:45:14.7899940Z 2025-03-17T18:45:14.7900048Z Returns: 2025-03-17T18:45:14.7900452Z A ``Callable`` that applies ``func`` to each local shard of the input :class:`DTensor` 2025-03-17T18:45:14.7901102Z and returns a :class:`DTensor` constructed from the return value of ``func``. 2025-03-17T18:45:14.7901471Z 2025-03-17T18:45:14.7901582Z Raises: 2025-03-17T18:45:14.7901972Z AssertionError: If the input :class:`DTensor` is not placed on the same device 2025-03-17T18:45:14.7902616Z mesh, or if they are placed on a different device mesh than the ``device_mesh`` 2025-03-17T18:45:14.7903143Z argument passed in. 2025-03-17T18:45:14.7903329Z 2025-03-17T18:45:14.7903594Z AssertionError: For any non-DTensor output, we require its corresponding 2025-03-17T18:45:14.7904234Z output placement in ``out_placements`` be None. An AssertionError will be raised 2025-03-17T18:45:14.7904741Z if this is not the case. 2025-03-17T18:45:14.7904970Z 2025-03-17T18:45:14.7905256Z ValueError: If ``redistribute_inputs=False`` but the input :class:`DTensor` needs 2025-03-17T18:45:14.7905813Z a redistribution according to ``in_placements``. 2025-03-17T18:45:14.7906085Z 2025-03-17T18:45:14.7906190Z Example: 2025-03-17T18:45:14.7906534Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.7906933Z >>> def mm_allreduce_forward(device_mesh, W, X): 2025-03-17T18:45:14.7907332Z >>> partial_sum_tensor = torch.mm(W, X) 2025-03-17T18:45:14.7907832Z >>> reduced_tensor = funcol.all_reduce(partial_sum_tensor, "sum", device_mesh) 2025-03-17T18:45:14.7908328Z >>> return reduced_tensor 2025-03-17T18:45:14.7908621Z >>> 2025-03-17T18:45:14.7908887Z >>> W = torch.randn(12, 8, requires_grad=False) 2025-03-17T18:45:14.7909273Z >>> X = torch.randn(8, 16, requires_grad=False) 2025-03-17T18:45:14.7909632Z >>> Y = torch.mm(W, X) 2025-03-17T18:45:14.7910012Z >>> row_wise = [Shard(0)] # row-wise sharding placements on 1-d mesh 2025-03-17T18:45:14.7910527Z >>> col_wise = [Shard(1)] # col-wise sharding placements on 1-d mesh 2025-03-17T18:45:14.7910934Z >>> 2025-03-17T18:45:14.7911341Z >>> # local_mm_allreduce_forward is the function wrapped with DTensor/Tensor convertion 2025-03-17T18:45:14.7911877Z >>> local_mm_allreduce_forward = local_map( 2025-03-17T18:45:14.7912308Z >>> mm_allreduce_forward, 2025-03-17T18:45:14.7912642Z >>> out_placements=[Replicate()], 2025-03-17T18:45:14.7913012Z >>> in_placements=[col_wise, row_wise], 2025-03-17T18:45:14.7913370Z >>> device_mesh=device_mesh, 2025-03-17T18:45:14.7913686Z >>> ) 2025-03-17T18:45:14.7913910Z >>> 2025-03-17T18:45:14.7914152Z >>> W_dt = distribute_tensor( 2025-03-17T18:45:14.7914471Z ... W, device_mesh, (col_wise) 2025-03-17T18:45:14.7914808Z ... ) # col-wisely sharded W tensor 2025-03-17T18:45:14.7915134Z >>> X_dt = distribute_tensor( 2025-03-17T18:45:14.7915457Z ... X, device_mesh, (row_wise) 2025-03-17T18:45:14.7915789Z ... ) # row-wisely sharded X tensor 2025-03-17T18:45:14.7916136Z >>> Y_dt = local_mm_allreduce_forward( 2025-03-17T18:45:14.7916479Z ... device_mesh, W_dt, X_dt 2025-03-17T18:45:14.7916839Z ... ) # apply local_mm_allreduce_forward to DTensors 2025-03-17T18:45:14.7917129Z 2025-03-17T18:45:14.7917350Z .. 
note:: This API is currently experimental and subject to change 2025-03-17T18:45:14.7917681Z 2025-03-17T18:45:14.7917942Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.7918336Z 2025-03-17T18:45:14.7919083Z msg = Cannot scrape callname=register_sharding in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/experimental/_register_sharding.py line=26. 2025-03-17T18:45:14.7920213Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.7920604Z 2025-03-17T18:45:14.7920942Z :meth:`register_sharding` is an experimental API that allows users to register sharding 2025-03-17T18:45:14.7921605Z strategies for an operator when the tensor inputs and outputs are DTensor. 2025-03-17T18:45:14.7922244Z It can be useful when: (1) there doesn't exist a default sharding strategy for ``op``, 2025-03-17T18:45:14.7922877Z e.g. when ``op`` is a custom operator that is not supported by :class:`DTensor`; (2) 2025-03-17T18:45:14.7923545Z when users would like to overwrite default sharding strategies of existing operators. 2025-03-17T18:45:14.7923967Z 2025-03-17T18:45:14.7924058Z Args: 2025-03-17T18:45:14.7924328Z op (Union[OpOverload, List[OpOverload]]): 2025-03-17T18:45:14.7924817Z An op or a list of ops to register the customized sharding function. 2025-03-17T18:45:14.7925156Z 2025-03-17T18:45:14.7925250Z Returns: 2025-03-17T18:45:14.7925667Z A function decorator which can be used to wrap a function that defines the sharding 2025-03-17T18:45:14.7926352Z strategy for the operator specified in ``op``. The defined sharding strategy will be 2025-03-17T18:45:14.7927051Z registered to DTensor and will override the default sharding strategy if DTensor has 2025-03-17T18:45:14.7927779Z already implemented the operator. The customized sharding function takes the same inputs 2025-03-17T18:45:14.7928478Z as the original op (except that if an arg is a :class:`torch.Tensor`, it will be 2025-03-17T18:45:14.7929139Z replaced by a tensor-like object that DTensor uses internally). The function should 2025-03-17T18:45:14.7929832Z return a sequence of 2-tuples, each specifying acceptable output placements and their 2025-03-17T18:45:14.7930366Z corresponding input placements. 2025-03-17T18:45:14.7930603Z 2025-03-17T18:45:14.7930695Z Example: 2025-03-17T18:45:14.7930942Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:14.7931314Z >>> @register_sharding(aten._softmax.default) 2025-03-17T18:45:14.7931733Z >>> def custom_softmax_sharding(x, dim, half_to_float): 2025-03-17T18:45:14.7932170Z >>> softmax_dim = dim if dim >= 0 else dim + x.ndim 2025-03-17T18:45:14.7932556Z >>> acceptable_shardings = [] 2025-03-17T18:45:14.7932870Z >>> 2025-03-17T18:45:14.7933180Z >>> all_replicate = ([Replicate()], [Replicate(), None, None]) 2025-03-17T18:45:14.7933647Z >>> acceptable_shardings.append(all_replicate) 2025-03-17T18:45:14.7934007Z >>> 2025-03-17T18:45:14.7934311Z >>> for sharding_dim in range(x.ndim): 2025-03-17T18:45:14.7934681Z >>> if sharding_dim != softmax_dim: 2025-03-17T18:45:14.7935031Z >>> all_sharded = ( 2025-03-17T18:45:14.7935361Z >>> [Shard(sharding_dim)], 2025-03-17T18:45:14.7935729Z >>> [Shard(sharding_dim), None, None], 2025-03-17T18:45:14.7936078Z >>> ) 2025-03-17T18:45:14.7936391Z >>> acceptable_shardings.append(all_sharded) 2025-03-17T18:45:14.7936938Z >>> 2025-03-17T18:45:14.7937205Z >>> return acceptable_shardings 2025-03-17T18:45:14.7937444Z 2025-03-17T18:45:14.7937651Z ..
note:: This API is currently experimental and subject to change 2025-03-17T18:45:14.7937981Z 2025-03-17T18:45:14.7938241Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.7938634Z 2025-03-17T18:45:14.8164551Z msg = Cannot scrape callname=PrepareModuleInput in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py line=403. 2025-03-17T18:45:14.8165710Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8166179Z 2025-03-17T18:45:14.8166571Z Configure the nn.Module's inputs to convert the input tensors of the nn.Module to DTensors at runtime according to 2025-03-17T18:45:14.8167543Z ``input_layouts``, and perform layout redistribution according to the ``desired_input_layouts``. 2025-03-17T18:45:14.8168009Z 2025-03-17T18:45:14.8168107Z Keyword Args: 2025-03-17T18:45:14.8168520Z input_layouts (Union[Placement, Tuple[Optional[Placement]]]): 2025-03-17T18:45:14.8169489Z The DTensor layouts of input tensors for the nn.Module; this is used to convert the input tensors to 2025-03-17T18:45:14.8170378Z DTensors. If some inputs are not torch.Tensor or do not need to be converted to DTensors, ``None`` needs to be specified 2025-03-17T18:45:14.8171007Z as a placeholder. default: None. 2025-03-17T18:45:14.8171496Z desired_input_layouts (Union[Placement, Tuple[Optional[Placement]]]): 2025-03-17T18:45:14.8172249Z The desired DTensor layout of input tensors for the nn.Module; this is used to ensure the inputs of the nn.Module 2025-03-17T18:45:14.8173231Z have the desired DTensor layouts. This argument needs to have the same length as ``input_layouts``. default: None. 2025-03-17T18:45:14.8173903Z input_kwarg_layouts (Dict[str, Placement]): 2025-03-17T18:45:14.8174545Z The DTensor layouts of input kwargs for the nn.Module; this is used to convert the input kwarg tensors to DTensors. 2025-03-17T18:45:14.8175162Z default: None 2025-03-17T18:45:14.8175494Z desired_input_kwarg_layouts (Dict[str, Placement]): 2025-03-17T18:45:14.8176160Z The desired DTensor layout of input kwargs for the nn.Module; this is used to ensure the inputs of the nn.Module 2025-03-17T18:45:14.8177150Z have the desired DTensor layouts. default: None. 2025-03-17T18:45:14.8177687Z use_local_output (bool, optional): 2025-03-17T18:45:14.8178289Z Whether to use local :class:`torch.Tensor` instead of :class:`DTensor` for the module inputs, default: False. 2025-03-17T18:45:14.8178878Z Returns: 2025-03-17T18:45:14.8179338Z A :class:`ParallelStyle` object that prepares the sharding layouts of the nn.Module's inputs. 2025-03-17T18:45:14.8179789Z 2025-03-17T18:45:14.8179907Z Example:: 2025-03-17T18:45:14.8180158Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:14.8180700Z >>> from torch.distributed.tensor.parallel import parallelize_module, PrepareModuleInput 2025-03-17T18:45:14.8181356Z >>> from torch.distributed.device_mesh import init_device_mesh 2025-03-17T18:45:14.8181772Z >>> ... 2025-03-17T18:45:14.8182222Z >>> block = TransformerBlock(...) # block is a nn.Module that contains an "attn" Attention submodule 2025-03-17T18:45:14.8182794Z >>> tp_mesh = init_device_mesh("cuda", (8,)) 2025-03-17T18:45:14.8183137Z >>> 2025-03-17T18:45:14.8183768Z >>> # According to the style specified below, the first input of attn will be annotated to Sharded DTensor 2025-03-17T18:45:14.8184431Z >>> # and then redistributed to Replicated DTensor.
2025-03-17T18:45:14.8184813Z >>> parallelize_module( 2025-03-17T18:45:14.8185135Z >>> block, # this can be a submodule or module 2025-03-17T18:45:14.8185494Z >>> tp_mesh, 2025-03-17T18:45:14.8185771Z >>> parallelize_plan={ 2025-03-17T18:45:14.8186102Z >>> "attn": PrepareModuleInput( 2025-03-17T18:45:14.8186579Z >>> input_layouts=(Shard(0), None, None, ...), 2025-03-17T18:45:14.8187032Z >>> desired_input_layouts=(Replicate(), None, None, ...) 2025-03-17T18:45:14.8187429Z >>> ), 2025-03-17T18:45:14.8187730Z >>> } 2025-03-17T18:45:14.8187967Z >>> ) 2025-03-17T18:45:14.8188107Z 2025-03-17T18:45:14.8188372Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8188765Z 2025-03-17T18:45:14.8189452Z msg = Cannot scrape callname=PrepareModuleOutput in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py line=562. 2025-03-17T18:45:14.8190516Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8190924Z 2025-03-17T18:45:14.8191326Z Configure the nn.Module's outputs to convert the output tensors of the nn.Module to DTensors at runtime according to 2025-03-17T18:45:14.8192198Z ``output_layouts``, and perform layout redistribution according to the ``desired_output_layouts``. 2025-03-17T18:45:14.8192707Z 2025-03-17T18:45:14.8192806Z Keyword Args: 2025-03-17T18:45:14.8193132Z output_layouts (Union[Placement, Tuple[Placement]]): 2025-03-17T18:45:14.8193773Z The DTensor layouts of output tensors for the nn.Module; this is used to convert the output tensors to 2025-03-17T18:45:14.8194641Z DTensors if they are :class:`torch.Tensor`. If some outputs are not torch.Tensor or do not need to be converted to DTensors, 2025-03-17T18:45:14.8195306Z ``None`` needs to be specified as a placeholder. 2025-03-17T18:45:14.8195779Z desired_output_layouts (Union[Placement, Tuple[Placement]]): 2025-03-17T18:45:14.8196505Z The desired DTensor layouts of output tensors for the nn.Module; this is used to ensure the outputs of the nn.Module 2025-03-17T18:45:14.8197207Z have the desired DTensor layouts. 2025-03-17T18:45:14.8197568Z use_local_output (bool, optional): 2025-03-17T18:45:14.8198169Z Whether to use local :class:`torch.Tensor` instead of :class:`DTensor` for the module outputs, default: True. 2025-03-17T18:45:14.8198766Z Returns: 2025-03-17T18:45:14.8199205Z A :class:`ParallelStyle` object that prepares the sharding layouts of the nn.Module's outputs. 2025-03-17T18:45:14.8199626Z 2025-03-17T18:45:14.8199737Z Example:: 2025-03-17T18:45:14.8199983Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:14.8200530Z >>> from torch.distributed.tensor.parallel import parallelize_module, PrepareModuleOutput 2025-03-17T18:45:14.8201181Z >>> from torch.distributed.device_mesh import init_device_mesh 2025-03-17T18:45:14.8201580Z >>> ... 2025-03-17T18:45:14.8202028Z >>> block = TransformerBlock(...) # block is a nn.Module that contains an "attn" Attention submodule 2025-03-17T18:45:14.8202601Z >>> tp_mesh = init_device_mesh("cuda", (8,)) 2025-03-17T18:45:14.8202951Z >>> 2025-03-17T18:45:14.8203490Z >>> # According to the style specified below, the output of the TransformerBlock will be converted to Replicated DTensor 2025-03-17T18:45:14.8204168Z >>> # and then redistributed to Sharded DTensor.
2025-03-17T18:45:14.8204542Z >>> parallelize_module( 2025-03-17T18:45:14.8204875Z >>> block, # this can be a submodule or module 2025-03-17T18:45:14.8205231Z >>> tp_mesh, 2025-03-17T18:45:14.8205534Z >>> parallelize_plan = PrepareModuleOutput( 2025-03-17T18:45:14.8205919Z >>> output_layouts=Replicate(), 2025-03-17T18:45:14.8206339Z >>> desired_output_layouts=Shard(0) 2025-03-17T18:45:14.8206674Z >>> ) 2025-03-17T18:45:14.8206904Z >>> ) 2025-03-17T18:45:14.8207028Z 2025-03-17T18:45:14.8207308Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8207689Z 2025-03-17T18:45:14.8744731Z msg = Cannot scrape callname=LowRankMultivariateNormal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/lowrank_multivariate_normal.py line=55. 2025-03-17T18:45:14.8746030Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8746518Z 2025-03-17T18:45:14.8746906Z Creates a multivariate normal distribution with covariance matrix having a low-rank form 2025-03-17T18:45:14.8747667Z parameterized by :attr:`cov_factor` and :attr:`cov_diag`:: 2025-03-17T18:45:14.8747972Z 2025-03-17T18:45:14.8748168Z covariance_matrix = cov_factor @ cov_factor.T + cov_diag 2025-03-17T18:45:14.8748465Z 2025-03-17T18:45:14.8748589Z Example: 2025-03-17T18:45:14.8748884Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_LAPACK) 2025-03-17T18:45:14.8749310Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:14.8749752Z >>> m = LowRankMultivariateNormal( 2025-03-17T18:45:14.8750186Z ... torch.zeros(2), torch.tensor([[1.0], [0.0]]), torch.ones(2) 2025-03-17T18:45:14.8750656Z ... ) 2025-03-17T18:45:14.8751103Z >>> m.sample() # normally distributed with mean=`[0,0]`, cov_factor=`[[1],[0]]`, cov_diag=`[1,1]` 2025-03-17T18:45:14.8751685Z tensor([-0.2102, -0.5429]) 2025-03-17T18:45:14.8752118Z 2025-03-17T18:45:14.8752215Z Args: 2025-03-17T18:45:14.8752598Z loc (Tensor): mean of the distribution with shape `batch_shape + event_shape` 2025-03-17T18:45:14.8753301Z cov_factor (Tensor): factor part of low-rank form of covariance matrix with shape 2025-03-17T18:45:14.8753890Z `batch_shape + event_shape + (rank,)` 2025-03-17T18:45:14.8754474Z cov_diag (Tensor): diagonal part of low-rank form of covariance matrix with shape 2025-03-17T18:45:14.8755029Z `batch_shape + event_shape` 2025-03-17T18:45:14.8755270Z 2025-03-17T18:45:14.8755360Z Note: 2025-03-17T18:45:14.8755827Z The computation for determinant and inverse of covariance matrix is avoided when 2025-03-17T18:45:14.8756618Z `cov_factor.shape[1] << cov_factor.shape[0]` thanks to `Woodbury matrix identity 2025-03-17T18:45:14.8757225Z `_ and 2025-03-17T18:45:14.8757935Z `matrix determinant lemma `_. 2025-03-17T18:45:14.8758731Z Thanks to these formulas, we just need to compute the determinant and inverse of 2025-03-17T18:45:14.8759258Z the small size "capacitance" matrix:: 2025-03-17T18:45:14.8759504Z 2025-03-17T18:45:14.8759695Z capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor 2025-03-17T18:45:14.8760015Z 2025-03-17T18:45:14.8760299Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8760718Z 2025-03-17T18:45:14.8764141Z msg = Cannot scrape callname=MixtureSameFamily in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/mixture_same_family.py line=13. 
2025-03-17T18:45:14.8765234Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8765657Z 2025-03-17T18:45:14.8765893Z The `MixtureSameFamily` distribution implements a (batch of) mixture 2025-03-17T18:45:14.8766503Z distribution where all components are from different parameterizations of 2025-03-17T18:45:14.8767114Z the same distribution type. It is parameterized by a `Categorical` 2025-03-17T18:45:14.8767667Z "selecting distribution" (over `k` components) and a component 2025-03-17T18:45:14.8768339Z distribution, i.e., a `Distribution` with a rightmost batch shape 2025-03-17T18:45:14.8768848Z (equal to `[k]`) which indexes each (batch of) component. 2025-03-17T18:45:14.8769128Z 2025-03-17T18:45:14.8769424Z Examples:: 2025-03-17T18:45:14.8769561Z 2025-03-17T18:45:14.8769703Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:14.8770159Z >>> # Construct Gaussian Mixture Model in 1D consisting of 5 equally 2025-03-17T18:45:14.8770625Z >>> # weighted normal distributions 2025-03-17T18:45:14.8770988Z >>> mix = D.Categorical(torch.ones(5,)) 2025-03-17T18:45:14.8771385Z >>> comp = D.Normal(torch.randn(5,), torch.rand(5,)) 2025-03-17T18:45:14.8771791Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:14.8772027Z 2025-03-17T18:45:14.8772247Z >>> # Construct Gaussian Mixture Model in 2D consisting of 5 equally 2025-03-17T18:45:14.8772715Z >>> # weighted bivariate normal distributions 2025-03-17T18:45:14.8773105Z >>> mix = D.Categorical(torch.ones(5,)) 2025-03-17T18:45:14.8773468Z >>> comp = D.Independent(D.Normal( 2025-03-17T18:45:14.8773832Z ... torch.randn(5,2), torch.rand(5,2)), 1) 2025-03-17T18:45:14.8774219Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:14.8774467Z 2025-03-17T18:45:14.8774652Z >>> # Construct a batch of 3 Gaussian Mixture Models in 2D each 2025-03-17T18:45:14.8775171Z >>> # consisting of 5 randomly weighted bivariate normal distributions 2025-03-17T18:45:14.8775626Z >>> mix = D.Categorical(torch.rand(3,5)) 2025-03-17T18:45:14.8775986Z >>> comp = D.Independent(D.Normal( 2025-03-17T18:45:14.8776350Z ... torch.randn(3,5,2), torch.rand(3,5,2)), 1) 2025-03-17T18:45:14.8776734Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:14.8776979Z 2025-03-17T18:45:14.8777068Z Args: 2025-03-17T18:45:14.8777514Z mixture_distribution: `torch.distributions.Categorical`-like 2025-03-17T18:45:14.8778046Z instance. Manages the probability of selecting components. 2025-03-17T18:45:14.8778543Z The number of categories must match the rightmost batch 2025-03-17T18:45:14.8779034Z dimension of the `component_distribution`. Must have either 2025-03-17T18:45:14.8779515Z scalar `batch_shape` or `batch_shape` matching 2025-03-17T18:45:14.8779926Z `component_distribution.batch_shape[:-1]` 2025-03-17T18:45:14.8780413Z component_distribution: `torch.distributions.Distribution`-like 2025-03-17T18:45:14.8780964Z instance. Right-most batch dimension indexes components. 2025-03-17T18:45:14.8781303Z 2025-03-17T18:45:14.8781574Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8781953Z 2025-03-17T18:45:14.8894427Z msg = Cannot scrape callname=RelaxedBernoulli in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/relaxed_bernoulli.py line=111.
2025-03-17T18:45:14.8895618Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8896013Z 2025-03-17T18:45:14.8896244Z Creates a RelaxedBernoulli distribution, parametrized by 2025-03-17T18:45:14.8896784Z :attr:`temperature`, and either :attr:`probs` or :attr:`logits` 2025-03-17T18:45:14.8897405Z (but not both). This is a relaxed version of the `Bernoulli` distribution, 2025-03-17T18:45:14.8898003Z so the values are in (0, 1), and has reparametrizable samples. 2025-03-17T18:45:14.8898310Z 2025-03-17T18:45:14.8898430Z Example:: 2025-03-17T18:45:14.8898615Z 2025-03-17T18:45:14.8898783Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:14.8899186Z >>> m = RelaxedBernoulli(torch.tensor([2.2]), 2025-03-17T18:45:14.8899632Z ... torch.tensor([0.1, 0.2, 0.3, 0.99])) 2025-03-17T18:45:14.8899985Z >>> m.sample() 2025-03-17T18:45:14.8900307Z tensor([ 0.2951, 0.3442, 0.8918, 0.9021]) 2025-03-17T18:45:14.8900558Z 2025-03-17T18:45:14.8900647Z Args: 2025-03-17T18:45:14.8900981Z temperature (Tensor): relaxation temperature 2025-03-17T18:45:14.8901424Z probs (Number, Tensor): the probability of sampling `1` 2025-03-17T18:45:14.8901950Z logits (Number, Tensor): the log-odds of sampling `1` 2025-03-17T18:45:14.8902242Z 2025-03-17T18:45:14.8902732Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8903171Z 2025-03-17T18:45:14.8915460Z msg = Cannot scrape callname=RelaxedOneHotCategorical in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/relaxed_categorical.py line=101. 2025-03-17T18:45:14.8916623Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:14.8917085Z 2025-03-17T18:45:14.8917309Z Creates a RelaxedOneHotCategorical distribution parametrized by 2025-03-17T18:45:14.8917914Z :attr:`temperature`, and either :attr:`probs` or :attr:`logits`. 2025-03-17T18:45:14.8918541Z This is a relaxed version of the :class:`OneHotCategorical` distribution, so 2025-03-17T18:45:14.8919138Z its samples are on simplex, and are reparametrizable. 2025-03-17T18:45:14.8919430Z 2025-03-17T18:45:14.8919533Z Example:: 2025-03-17T18:45:14.8919673Z 2025-03-17T18:45:14.8919869Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:14.8920303Z >>> m = RelaxedOneHotCategorical(torch.tensor([2.2]), 2025-03-17T18:45:14.8920770Z ... torch.tensor([0.1, 0.2, 0.3, 0.4])) 2025-03-17T18:45:14.8921126Z >>> m.sample() 2025-03-17T18:45:14.8921454Z tensor([ 0.1294, 0.2324, 0.3859, 0.2523]) 2025-03-17T18:45:14.8921703Z 2025-03-17T18:45:14.8921791Z Args: 2025-03-17T18:45:14.8922093Z temperature (Tensor): relaxation temperature 2025-03-17T18:45:14.8922498Z probs (Tensor): event probabilities 2025-03-17T18:45:14.8922979Z logits (Tensor): unnormalized log probability for each event 2025-03-17T18:45:14.8923379Z 2025-03-17T18:45:14.8923709Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:14.8924088Z 2025-03-17T18:45:15.3965092Z msg = Cannot scrape callname=assoc_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=245. 2025-03-17T18:45:15.3966700Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.3967299Z Return a new dict with new, potentially nested, key value pair 2025-03-17T18:45:15.3967612Z 2025-03-17T18:45:15.3967729Z >>> purchase = { 2025-03-17T18:45:15.3968080Z ... 
"name": "Alice", 2025-03-17T18:45:15.3968975Z ... "order": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:15.3969498Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:15.3969832Z ... } 2025-03-17T18:45:15.3970180Z >>> assoc_in(purchase, ["order", "costs"], [0.25, 1.00]) # doctest: +SKIP 2025-03-17T18:45:15.3970637Z {'credit card': '5555-1234-1234-1234', 2025-03-17T18:45:15.3971004Z 'name': 'Alice', 2025-03-17T18:45:15.3971423Z 'order': {'costs': [0.25, 1.00], 'items': ['Apple', 'Orange']}} 2025-03-17T18:45:15.3971941Z 2025-03-17T18:45:15.3972390Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.3972772Z 2025-03-17T18:45:15.3973529Z msg = Cannot scrape callname=update_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=261. 2025-03-17T18:45:15.3974704Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.3975313Z Update value in a (potentially) nested dictionary 2025-03-17T18:45:15.3975584Z 2025-03-17T18:45:15.3975689Z inputs: 2025-03-17T18:45:15.3975993Z d - dictionary on which to operate 2025-03-17T18:45:15.3976457Z keys - list or tuple giving the location of the value to be changed in d 2025-03-17T18:45:15.3977008Z func - function to operate on that value 2025-03-17T18:45:15.3977260Z 2025-03-17T18:45:15.3977502Z If keys == [k0,..,kX] and d[k0]..[kX] == v, update_in returns a copy of the 2025-03-17T18:45:15.3978076Z original dictionary with v replaced by func(v), but does not mutate the 2025-03-17T18:45:15.3978615Z original dictionary. 2025-03-17T18:45:15.3978804Z 2025-03-17T18:45:15.3979233Z If k0 is not a key in d, update_in creates nested dictionaries to the depth 2025-03-17T18:45:15.3979869Z specified by the keys, with the innermost value set to func(default). 2025-03-17T18:45:15.3980228Z 2025-03-17T18:45:15.3980338Z >>> inc = lambda x: x + 1 2025-03-17T18:45:15.3980705Z >>> update_in({"a": 0}, ["a"], inc) 2025-03-17T18:45:15.3981030Z {'a': 1} 2025-03-17T18:45:15.3981174Z 2025-03-17T18:45:15.3981307Z >>> transaction = { 2025-03-17T18:45:15.3981603Z ... "name": "Alice", 2025-03-17T18:45:15.3981987Z ... "purchase": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:15.3982486Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:15.3982862Z ... } 2025-03-17T18:45:15.3983224Z >>> update_in(transaction, ["purchase", "costs"], sum) # doctest: +SKIP 2025-03-17T18:45:15.3983734Z {'credit card': '5555-1234-1234-1234', 2025-03-17T18:45:15.4006910Z 'name': 'Alice', 2025-03-17T18:45:15.4007439Z 'purchase': {'costs': 1.75, 'items': ['Apple', 'Orange']}} 2025-03-17T18:45:15.4007753Z 2025-03-17T18:45:15.4007889Z >>> # updating a value when k0 is not in d 2025-03-17T18:45:15.4008362Z >>> update_in({}, [1, 2, 3], str, default="bar") 2025-03-17T18:45:15.4008724Z {1: {2: {3: 'bar'}}} 2025-03-17T18:45:15.4009028Z >>> update_in({1: "foo"}, [2, 3, 4], inc, 0) 2025-03-17T18:45:15.4009376Z {1: 'foo', 2: {3: {4: 1}}} 2025-03-17T18:45:15.4009660Z 2025-03-17T18:45:15.4010055Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.4010567Z 2025-03-17T18:45:15.4011243Z msg = Cannot scrape callname=get_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=320. 
2025-03-17T18:45:15.4012286Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.4012834Z Returns coll[i0][i1]...[iX] where [i0, i1, ..., iX]==keys. 2025-03-17T18:45:15.4013119Z 2025-03-17T18:45:15.4013303Z If coll[i0][i1]...[iX] cannot be found, returns ``default``, unless 2025-03-17T18:45:15.4013805Z ``no_default`` is specified, then it raises KeyError or IndexError. 2025-03-17T18:45:15.4014123Z 2025-03-17T18:45:15.4014335Z ``get_in`` is a generalization of ``operator.getitem`` for nested data 2025-03-17T18:45:15.4014829Z structures such as dictionaries and lists. 2025-03-17T18:45:15.4015144Z 2025-03-17T18:45:15.4015250Z >>> transaction = { 2025-03-17T18:45:15.4015505Z ... "name": "Alice", 2025-03-17T18:45:15.4015870Z ... "purchase": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:15.4016308Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:15.4016673Z ... } 2025-03-17T18:45:15.4016923Z >>> get_in(["purchase", "items", 0], transaction) 2025-03-17T18:45:15.4017255Z 'Apple' 2025-03-17T18:45:15.4017486Z >>> get_in(["name"], transaction) 2025-03-17T18:45:15.4017786Z 'Alice' 2025-03-17T18:45:15.4018055Z >>> get_in(["purchase", "total"], transaction) 2025-03-17T18:45:15.4018453Z >>> get_in(["purchase", "items", "apple"], transaction) 2025-03-17T18:45:15.4018852Z >>> get_in(["purchase", "items", 10], transaction) 2025-03-17T18:45:15.4019248Z >>> get_in(["purchase", "total"], transaction, 0) 2025-03-17T18:45:15.4019587Z 0 2025-03-17T18:45:15.4019809Z >>> get_in(["y"], {}, no_default=True) 2025-03-17T18:45:15.4020149Z Traceback (most recent call last): 2025-03-17T18:45:15.4020460Z ... 2025-03-17T18:45:15.4020674Z KeyError: 'y' 2025-03-17T18:45:15.4020822Z 2025-03-17T18:45:15.4020918Z See Also: 2025-03-17T18:45:15.4021145Z itertoolz.get 2025-03-17T18:45:15.4021412Z operator.getitem 2025-03-17T18:45:15.4021673Z 2025-03-17T18:45:15.4022064Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.4022453Z 2025-03-17T18:45:15.4023208Z msg = Cannot scrape callname=groupby in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=373. 2025-03-17T18:45:15.4024263Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.4024777Z Group a collection by a key function 2025-03-17T18:45:15.4025011Z 2025-03-17T18:45:15.4025184Z >>> names = ["Alice", "Bob", "Charlie", "Dan", "Edith", "Frank"] 2025-03-17T18:45:15.4025605Z >>> groupby(len, names) # doctest: +SKIP 2025-03-17T18:45:15.4026004Z {3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']} 2025-03-17T18:45:15.4026292Z 2025-03-17T18:45:15.4026398Z >>> iseven = lambda x: x % 2 == 0 2025-03-17T18:45:15.4026857Z >>> groupby(iseven, [1, 2, 3, 4, 5, 6, 7, 8]) # doctest: +SKIP 2025-03-17T18:45:15.4027250Z {False: [1, 3, 5, 7], True: [2, 4, 6, 8]} 2025-03-17T18:45:15.4027476Z 2025-03-17T18:45:15.4027616Z Non-callable keys imply grouping on a member. 2025-03-17T18:45:15.4027885Z 2025-03-17T18:45:15.4027985Z >>> groupby( 2025-03-17T18:45:15.4028236Z ... "gender", 2025-03-17T18:45:15.4028552Z ... [ 2025-03-17T18:45:15.4028821Z ... {"name": "Alice", "gender": "F"}, 2025-03-17T18:45:15.4029184Z ... {"name": "Bob", "gender": "M"}, 2025-03-17T18:45:15.4029549Z ... {"name": "Charlie", "gender": "M"}, 2025-03-17T18:45:15.4029889Z ... ], 2025-03-17T18:45:15.4030134Z ... 
) # doctest:+SKIP 2025-03-17T18:45:15.4030433Z {'F': [{'gender': 'F', 'name': 'Alice'}], 2025-03-17T18:45:15.4030782Z 'M': [{'gender': 'M', 'name': 'Bob'}, 2025-03-17T18:45:15.4031170Z {'gender': 'M', 'name': 'Charlie'}]} 2025-03-17T18:45:15.4031398Z 2025-03-17T18:45:15.4031557Z Not to be confused with ``itertools.groupby`` 2025-03-17T18:45:15.4031813Z 2025-03-17T18:45:15.4031919Z See Also: 2025-03-17T18:45:15.4032141Z countby 2025-03-17T18:45:15.4032377Z 2025-03-17T18:45:15.4032770Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.4033163Z 2025-03-17T18:45:15.7861982Z msg = Cannot scrape callname=SyncBatchNorm in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py line=601. 2025-03-17T18:45:15.7863063Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.7863897Z Applies Batch Normalization over a N-Dimensional input. 2025-03-17T18:45:15.7864267Z 2025-03-17T18:45:15.7864622Z The N-D input is a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper 2025-03-17T18:45:15.7865465Z `Batch Normalization: Accelerating Deep Network Training by Reducing 2025-03-17T18:45:15.7866254Z Internal Covariate Shift `__ . 2025-03-17T18:45:15.7866749Z 2025-03-17T18:45:15.7866865Z .. math:: 2025-03-17T18:45:15.7867009Z 2025-03-17T18:45:15.7867265Z y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta 2025-03-17T18:45:15.7867682Z 2025-03-17T18:45:15.7867920Z The mean and standard-deviation are calculated per-dimension over all 2025-03-17T18:45:15.7868582Z mini-batches of the same process groups. :math:`\gamma` and :math:`\beta` 2025-03-17T18:45:15.7869264Z are learnable parameter vectors of size `C` (where `C` is the input size). 2025-03-17T18:45:15.7869888Z By default, the elements of :math:`\gamma` are sampled from 2025-03-17T18:45:15.7870460Z :math:`\mathcal{U}(0, 1)` and the elements of :math:`\beta` are set to 0. 2025-03-17T18:45:15.7871070Z The standard-deviation is calculated via the biased estimator, equivalent to 2025-03-17T18:45:15.7871652Z `torch.var(input, unbiased=False)`. 2025-03-17T18:45:15.7871909Z 2025-03-17T18:45:15.7872189Z Also by default, during training this layer keeps running estimates of its 2025-03-17T18:45:15.7872858Z computed mean and variance, which are then used for normalization during 2025-03-17T18:45:15.7873694Z evaluation. The running estimates are kept with a default :attr:`momentum` 2025-03-17T18:45:15.7874197Z of 0.1. 2025-03-17T18:45:15.7874356Z 2025-03-17T18:45:15.7874606Z If :attr:`track_running_stats` is set to ``False``, this layer then does not 2025-03-17T18:45:15.7875249Z keep running estimates, and batch statistics are instead used during 2025-03-17T18:45:15.7875751Z evaluation time as well. 2025-03-17T18:45:15.7875965Z 2025-03-17T18:45:15.7876076Z .. note:: 2025-03-17T18:45:15.7876441Z This :attr:`momentum` argument is different from one used in optimizer 2025-03-17T18:45:15.7877085Z classes and the conventional notion of momentum. Mathematically, the 2025-03-17T18:45:15.7877646Z update rule for running statistics here is 2025-03-17T18:45:15.7878239Z :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, 2025-03-17T18:45:15.7878908Z where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the 2025-03-17T18:45:15.7879364Z new observed value. 
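A minimal numeric sketch of that running-statistics update (values chosen for illustration, not taken from the docstring), with the default ``momentum`` of 0.1:

    momentum = 0.1
    running_mean = 0.0   # current estimate, \hat{x}
    batch_mean = 2.0     # statistic x_t observed on the current mini-batch

    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    # running_mean is now 0.2; repeated batches with mean 2.0 move it towards 2.0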
2025-03-17T18:45:15.7879599Z 2025-03-17T18:45:15.7879924Z Because the Batch Normalization is done for each channel in the ``C`` dimension, computing 2025-03-17T18:45:15.7880675Z statistics on ``(N, +)`` slices, it's common terminology to call this Volumetric Batch 2025-03-17T18:45:15.7881296Z Normalization or Spatio-temporal Batch Normalization. 2025-03-17T18:45:15.7881603Z 2025-03-17T18:45:15.7881754Z Currently :class:`SyncBatchNorm` only supports 2025-03-17T18:45:15.7882377Z :class:`~torch.nn.DistributedDataParallel` (DDP) with single GPU per process. Use 2025-03-17T18:45:15.7883146Z :meth:`torch.nn.SyncBatchNorm.convert_sync_batchnorm()` to convert 2025-03-17T18:45:15.7883772Z :attr:`BatchNorm*D` layer to :class:`SyncBatchNorm` before wrapping 2025-03-17T18:45:15.7884266Z Network with DDP. 2025-03-17T18:45:15.7884449Z 2025-03-17T18:45:15.7884541Z Args: 2025-03-17T18:45:15.7884853Z num_features: :math:`C` from an expected input of size 2025-03-17T18:45:15.7885315Z :math:`(N, C, +)` 2025-03-17T18:45:15.7885759Z eps: a value added to the denominator for numerical stability. 2025-03-17T18:45:15.7886195Z Default: ``1e-5`` 2025-03-17T18:45:15.7886642Z momentum: the value used for the running_mean and running_var 2025-03-17T18:45:15.7887276Z computation. Can be set to ``None`` for cumulative moving average 2025-03-17T18:45:15.7887746Z (i.e. simple average). Default: 0.1 2025-03-17T18:45:15.7888253Z affine: a boolean value that when set to ``True``, this module has 2025-03-17T18:45:15.7888791Z learnable affine parameters. Default: ``True`` 2025-03-17T18:45:15.7889298Z track_running_stats: a boolean value that when set to ``True``, this 2025-03-17T18:45:15.7889940Z module tracks the running mean and variance, and when set to ``False``, 2025-03-17T18:45:15.7890588Z this module does not track such statistics, and initializes statistics 2025-03-17T18:45:15.7891224Z buffers :attr:`running_mean` and :attr:`running_var` as ``None``. 2025-03-17T18:45:15.7891834Z When these buffers are ``None``, this module always uses batch statistics. 2025-03-17T18:45:15.7892363Z in both training and eval modes. Default: ``True`` 2025-03-17T18:45:15.7892959Z process_group: synchronization of stats happen within each process group 2025-03-17T18:45:15.7893635Z individually. Default behavior is synchronization across the whole 2025-03-17T18:45:15.7894115Z world 2025-03-17T18:45:15.7894299Z 2025-03-17T18:45:15.7894395Z Shape: 2025-03-17T18:45:15.7894645Z - Input: :math:`(N, C, +)` 2025-03-17T18:45:15.7895063Z - Output: :math:`(N, C, +)` (same shape as input) 2025-03-17T18:45:15.7895342Z 2025-03-17T18:45:15.7895439Z .. note:: 2025-03-17T18:45:15.7895889Z Synchronization of batchnorm statistics occurs only while training, i.e. 2025-03-17T18:45:15.7896612Z synchronization is disabled when ``model.eval()`` is set or if 2025-03-17T18:45:15.7897088Z ``self.training`` is otherwise ``False``. 2025-03-17T18:45:15.7897406Z 2025-03-17T18:45:15.7897507Z Examples:: 2025-03-17T18:45:15.7897663Z 2025-03-17T18:45:15.7897772Z >>> # xdoctest: +SKIP 2025-03-17T18:45:15.7898109Z >>> # With Learnable Parameters 2025-03-17T18:45:15.7898504Z >>> m = nn.SyncBatchNorm(100) 2025-03-17T18:45:15.7898864Z >>> # creating process group (optional) 2025-03-17T18:45:15.7899264Z >>> # ranks is a list of int identifying rank ids. 
2025-03-17T18:45:15.7899702Z >>> ranks = list(range(8)) 2025-03-17T18:45:15.7900023Z >>> r1, r2 = ranks[:4], ranks[4:] 2025-03-17T18:45:15.7900406Z >>> # Note: every rank calls into new_group for every 2025-03-17T18:45:15.7900835Z >>> # process group created, even if that rank is not 2025-03-17T18:45:15.7901227Z >>> # part of the group. 2025-03-17T18:45:15.7901715Z >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] 2025-03-17T18:45:15.7902323Z >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] 2025-03-17T18:45:15.7902778Z >>> # Without Learnable Parameters 2025-03-17T18:45:15.7903227Z >>> m = nn.BatchNorm3d(100, affine=False, process_group=process_group) 2025-03-17T18:45:15.7903696Z >>> input = torch.randn(20, 100, 35, 45, 10) 2025-03-17T18:45:15.7904049Z >>> output = m(input) 2025-03-17T18:45:15.7904248Z 2025-03-17T18:45:15.7904370Z >>> # network is nn.BatchNorm layer 2025-03-17T18:45:15.7904932Z >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group) 2025-03-17T18:45:15.7905575Z >>> # only single gpu per process is currently supported 2025-03-17T18:45:15.7906097Z >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel( 2025-03-17T18:45:15.7906635Z >>> sync_bn_network, 2025-03-17T18:45:15.7907020Z >>> device_ids=[args.local_rank], 2025-03-17T18:45:15.7907414Z >>> output_device=args.local_rank) 2025-03-17T18:45:15.7907764Z 2025-03-17T18:45:15.7908153Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.7908598Z 2025-03-17T18:45:15.7909307Z msg = Cannot scrape callname=SyncBatchNorm.convert_sync_batchnorm in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py line=825. 2025-03-17T18:45:15.7910361Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.7911074Z Converts all :attr:`BatchNorm*D` layers in the model to :class:`torch.nn.SyncBatchNorm` layers. 2025-03-17T18:45:15.7911521Z 2025-03-17T18:45:15.7911613Z Args: 2025-03-17T18:45:15.7912008Z module (nn.Module): module containing one or more :attr:`BatchNorm*D` layers 2025-03-17T18:45:15.7912681Z process_group (optional): process group to scope synchronization, 2025-03-17T18:45:15.7913150Z default is the whole world 2025-03-17T18:45:15.7913375Z 2025-03-17T18:45:15.7913482Z Returns: 2025-03-17T18:45:15.7913895Z The original :attr:`module` with the converted :class:`torch.nn.SyncBatchNorm` 2025-03-17T18:45:15.7914512Z layers. If the original :attr:`module` is a :attr:`BatchNorm*D` layer, 2025-03-17T18:45:15.7915086Z a new :class:`torch.nn.SyncBatchNorm` layer object will be returned 2025-03-17T18:45:15.7915527Z instead. 2025-03-17T18:45:15.7915684Z 2025-03-17T18:45:15.7915799Z Example:: 2025-03-17T18:45:15.7915945Z 2025-03-17T18:45:15.7916081Z >>> # Network with nn.BatchNorm layer 2025-03-17T18:45:15.7916478Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:15.7916868Z >>> module = torch.nn.Sequential( 2025-03-17T18:45:15.7917234Z >>> torch.nn.Linear(20, 100), 2025-03-17T18:45:15.7917707Z >>> torch.nn.BatchNorm1d(100), 2025-03-17T18:45:15.7918072Z >>> ).cuda() 2025-03-17T18:45:15.7918394Z >>> # creating process group (optional) 2025-03-17T18:45:15.7918799Z >>> # ranks is a list of int identifying rank ids. 
2025-03-17T18:45:15.7919187Z >>> ranks = list(range(8)) 2025-03-17T18:45:15.7919523Z >>> r1, r2 = ranks[:4], ranks[4:] 2025-03-17T18:45:15.7919917Z >>> # Note: every rank calls into new_group for every 2025-03-17T18:45:15.7920351Z >>> # process group created, even if that rank is not 2025-03-17T18:45:15.7920740Z >>> # part of the group. 2025-03-17T18:45:15.7921082Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:15.7921579Z >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] 2025-03-17T18:45:15.7922170Z >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] 2025-03-17T18:45:15.7922809Z >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group) 2025-03-17T18:45:15.7923245Z 2025-03-17T18:45:15.7923334Z 2025-03-17T18:45:15.7923724Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.7924103Z 2025-03-17T18:45:15.8114826Z msg = Cannot scrape callname=Unflatten in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/flatten.py line=60. 2025-03-17T18:45:15.8115827Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.8116458Z 2025-03-17T18:45:15.8116780Z Unflattens a tensor dim expanding it to a desired shape. For use with :class:`~nn.Sequential`. 2025-03-17T18:45:15.8117288Z 2025-03-17T18:45:15.8117569Z * :attr:`dim` specifies the dimension of the input tensor to be unflattened, and it can 2025-03-17T18:45:15.8118271Z be either `int` or `str` when `Tensor` or `NamedTensor` is used, respectively. 2025-03-17T18:45:15.8118708Z 2025-03-17T18:45:15.8119035Z * :attr:`unflattened_size` is the new shape of the unflattened dimension of the tensor and it can be 2025-03-17T18:45:15.8119808Z a `tuple` of ints or a `list` of ints or `torch.Size` for `Tensor` input; a `NamedShape` 2025-03-17T18:45:15.8120499Z (tuple of `(name, size)` tuples) for `NamedTensor` input. 2025-03-17T18:45:15.8120853Z 2025-03-17T18:45:15.8120945Z Shape: 2025-03-17T18:45:15.8121303Z - Input: :math:`(*, S_{\text{dim}}, *)`, where :math:`S_{\text{dim}}` is the size at 2025-03-17T18:45:15.8121966Z dimension :attr:`dim` and :math:`*` means any number of dimensions including none. 2025-03-17T18:45:15.8122634Z - Output: :math:`(*, U_1, ..., U_n, *)`, where :math:`U` = :attr:`unflattened_size` and 2025-03-17T18:45:15.8123167Z :math:`\prod_{i=1}^n U_i = S_{\text{dim}}`. 
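A quick runnable check of that shape constraint, using the tensor-level ``unflatten`` for brevity (a sketch, not part of the original docstring):

    import torch

    x = torch.randn(2, 50)
    y = x.unflatten(1, (2, 5, 5))   # valid because 2 * 5 * 5 == 50
    assert y.shape == torch.Size([2, 2, 5, 5])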
2025-03-17T18:45:15.8123414Z 2025-03-17T18:45:15.8123517Z Args: 2025-03-17T18:45:15.8123832Z dim (Union[int, str]): Dimension to be unflattened 2025-03-17T18:45:15.8124489Z unflattened_size (Union[torch.Size, Tuple, List, NamedShape]): New shape of the unflattened dimension 2025-03-17T18:45:15.8125024Z 2025-03-17T18:45:15.8125147Z Examples: 2025-03-17T18:45:15.8125452Z >>> input = torch.randn(2, 50) 2025-03-17T18:45:15.8125783Z >>> # With tuple of ints 2025-03-17T18:45:15.8126097Z >>> m = nn.Sequential( 2025-03-17T18:45:15.8126425Z >>> nn.Linear(50, 50), 2025-03-17T18:45:15.8126729Z >>> nn.Unflatten(1, (2, 5, 5)) 2025-03-17T18:45:15.8127086Z >>> ) 2025-03-17T18:45:15.8127320Z >>> output = m(input) 2025-03-17T18:45:15.8127613Z >>> output.size() 2025-03-17T18:45:15.8127925Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:15.8128215Z >>> # With torch.Size 2025-03-17T18:45:15.8128551Z >>> m = nn.Sequential( 2025-03-17T18:45:15.8128835Z >>> nn.Linear(50, 50), 2025-03-17T18:45:15.8129169Z >>> nn.Unflatten(1, torch.Size([2, 5, 5])) 2025-03-17T18:45:15.8129547Z >>> ) 2025-03-17T18:45:15.8129899Z >>> output = m(input) 2025-03-17T18:45:15.8130237Z >>> output.size() 2025-03-17T18:45:15.8130513Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:15.8130888Z >>> # With namedshape (tuple of tuples) 2025-03-17T18:45:15.8131289Z >>> input = torch.randn(2, 50, names=('N', 'features')) 2025-03-17T18:45:15.8131838Z >>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5))) 2025-03-17T18:45:15.8132340Z >>> output = unflatten(input) 2025-03-17T18:45:15.8132643Z >>> output.size() 2025-03-17T18:45:15.8132914Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:15.8133168Z 2025-03-17T18:45:15.8133431Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.8133875Z 2025-03-17T18:45:15.8451006Z msg = Cannot scrape callname=TripletMarginWithDistanceLoss in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py line=1700. 2025-03-17T18:45:15.8452138Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.8452754Z Creates a criterion that measures the triplet loss given input 2025-03-17T18:45:15.8453292Z tensors :math:`a`, :math:`p`, and :math:`n` (representing anchor, 2025-03-17T18:45:15.8453846Z positive, and negative examples, respectively), and a nonnegative, 2025-03-17T18:45:15.8454458Z real-valued function ("distance function") used to compute the relationship 2025-03-17T18:45:15.8455074Z between the anchor and positive example ("positive distance") and the 2025-03-17T18:45:15.8455592Z anchor and negative example ("negative distance"). 2025-03-17T18:45:15.8455994Z 2025-03-17T18:45:15.8456225Z The unreduced loss (i.e., with :attr:`reduction` set to ``'none'``) 2025-03-17T18:45:15.8456673Z can be described as: 2025-03-17T18:45:15.8456849Z 2025-03-17T18:45:15.8456972Z .. 
math:: 2025-03-17T18:45:15.8457252Z \ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad 2025-03-17T18:45:15.8457690Z l_i = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\} 2025-03-17T18:45:15.8457976Z 2025-03-17T18:45:15.8458230Z where :math:`N` is the batch size; :math:`d` is a nonnegative, real-valued function 2025-03-17T18:45:15.8458919Z quantifying the closeness of two tensors, referred to as the :attr:`distance_function`; 2025-03-17T18:45:15.8459655Z and :math:`margin` is a nonnegative margin representing the minimum difference 2025-03-17T18:45:15.8460291Z between the positive and negative distances that is required for the loss to 2025-03-17T18:45:15.8460913Z be 0. The input tensors have :math:`N` elements each and can be of any shape 2025-03-17T18:45:15.8461411Z that the distance function can handle. 2025-03-17T18:45:15.8461660Z 2025-03-17T18:45:15.8461783Z If :attr:`reduction` is not ``'none'`` 2025-03-17T18:45:15.8462131Z (default ``'mean'``), then: 2025-03-17T18:45:15.8462337Z 2025-03-17T18:45:15.8462432Z .. math:: 2025-03-17T18:45:15.8462674Z \ell(x, y) = 2025-03-17T18:45:15.8462941Z \begin{cases} 2025-03-17T18:45:15.8463322Z \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ 2025-03-17T18:45:15.8463857Z \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} 2025-03-17T18:45:15.8464279Z \end{cases} 2025-03-17T18:45:15.8464428Z 2025-03-17T18:45:15.8464685Z See also :class:`~torch.nn.TripletMarginLoss`, which computes the triplet 2025-03-17T18:45:15.8465315Z loss for input tensors using the :math:`l_p` distance as the distance function. 2025-03-17T18:45:15.8465696Z 2025-03-17T18:45:15.8465803Z Args: 2025-03-17T18:45:15.8466223Z distance_function (Callable, optional): A nonnegative, real-valued function that 2025-03-17T18:45:15.8466914Z quantifies the closeness of two tensors. If not specified, 2025-03-17T18:45:15.8467419Z `nn.PairwiseDistance` will be used. Default: ``None`` 2025-03-17T18:45:15.8467995Z margin (float, optional): A nonnegative margin representing the minimum difference 2025-03-17T18:45:15.8468786Z between the positive and negative distances required for the loss to be 0. Larger 2025-03-17T18:45:15.8469485Z margins penalize cases where the negative examples are not distant enough from the 2025-03-17T18:45:15.8470079Z anchors, relative to the positives. Default: :math:`1`. 2025-03-17T18:45:15.8470644Z swap (bool, optional): Whether to use the distance swap described in the paper 2025-03-17T18:45:15.8471298Z `Learning shallow convolutional feature descriptors with triplet losses` by 2025-03-17T18:45:15.8471940Z V. Balntas, E. Riba et al. If True, and if the positive example is closer to the 2025-03-17T18:45:15.8472592Z negative example than the anchor is, swaps the positive example and the anchor in 2025-03-17T18:45:15.8473135Z the loss computation. Default: ``False``. 2025-03-17T18:45:15.8473669Z reduction (str, optional): Specifies the (optional) reduction to apply to the output: 2025-03-17T18:45:15.8474282Z ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, 2025-03-17T18:45:15.8474789Z ``'mean'``: the sum of the output will be divided by the number of 2025-03-17T18:45:15.8475353Z elements in the output, ``'sum'``: the output will be summed. 
Default: ``'mean'`` 2025-03-17T18:45:15.8475737Z 2025-03-17T18:45:15.8475741Z 2025-03-17T18:45:15.8475836Z Shape: 2025-03-17T18:45:15.8476228Z - Input: :math:`(N, *)` where :math:`*` represents any number of additional dimensions 2025-03-17T18:45:15.8476735Z as supported by the distance function. 2025-03-17T18:45:15.8477282Z - Output: A Tensor of shape :math:`(N)` if :attr:`reduction` is ``'none'``, or a scalar 2025-03-17T18:45:15.8477765Z otherwise. 2025-03-17T18:45:15.8477933Z 2025-03-17T18:45:15.8478031Z Examples:: 2025-03-17T18:45:15.8478184Z 2025-03-17T18:45:15.8478293Z >>> # Initialize embeddings 2025-03-17T18:45:15.8478622Z >>> embedding = nn.Embedding(1000, 128) 2025-03-17T18:45:15.8478993Z >>> anchor_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:15.8479381Z >>> positive_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:15.8479768Z >>> negative_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:15.8480174Z >>> anchor = embedding(anchor_ids) 2025-03-17T18:45:15.8480531Z >>> positive = embedding(positive_ids) 2025-03-17T18:45:15.8480894Z >>> negative = embedding(negative_ids) 2025-03-17T18:45:15.8481226Z >>> 2025-03-17T18:45:15.8481473Z >>> # Built-in Distance Function 2025-03-17T18:45:15.8481806Z >>> triplet_loss = \ 2025-03-17T18:45:15.8482273Z >>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance()) 2025-03-17T18:45:15.8482855Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:15.8483240Z >>> output.backward() 2025-03-17T18:45:15.8483501Z >>> 2025-03-17T18:45:15.8483741Z >>> # Custom Distance Function 2025-03-17T18:45:15.8484067Z >>> def l_infinity(x1, x2): 2025-03-17T18:45:15.8484425Z >>> return torch.max(torch.abs(x1 - x2), dim=1).values 2025-03-17T18:45:15.8484796Z >>> 2025-03-17T18:45:15.8485123Z >>> # xdoctest: +SKIP("FIXME: Would call backwards a second time") 2025-03-17T18:45:15.8485550Z >>> triplet_loss = ( 2025-03-17T18:45:15.8486019Z >>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5)) 2025-03-17T18:45:15.8486593Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:15.8486978Z >>> output.backward() 2025-03-17T18:45:15.8487261Z >>> 2025-03-17T18:45:15.8487514Z >>> # Custom Distance Function (Lambda) 2025-03-17T18:45:15.8487855Z >>> triplet_loss = ( 2025-03-17T18:45:15.8488173Z >>> nn.TripletMarginWithDistanceLoss( 2025-03-17T18:45:15.8488651Z >>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y))) 2025-03-17T18:45:15.8489159Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:15.8489613Z >>> output.backward() 2025-03-17T18:45:15.8489794Z 2025-03-17T18:45:15.8489904Z Reference: 2025-03-17T18:45:15.8490352Z V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: 2025-03-17T18:45:15.8491036Z https://bmva-archive.org.uk/bmvc/2016/papers/paper119/index.html 2025-03-17T18:45:15.8491481Z 2025-03-17T18:45:15.8491871Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 17)) 2025-03-17T18:45:15.8492267Z 2025-03-17T18:45:15.9014410Z msg = Cannot scrape callname=MaxUnpool2d in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py line=395. 2025-03-17T18:45:15.9015407Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.9015968Z Computes a partial inverse of :class:`MaxPool2d`. 2025-03-17T18:45:15.9016255Z 2025-03-17T18:45:15.9016644Z :class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost. 
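A minimal sketch of that partial inversion (the pooling layer must be constructed with ``return_indices=True`` so the unpooling layer knows where each maximum came from):

    import torch
    from torch import nn

    pool = nn.MaxPool2d(2, stride=2, return_indices=True)
    unpool = nn.MaxUnpool2d(2, stride=2)

    x = torch.arange(16.0).reshape(1, 1, 4, 4)
    pooled, indices = pool(x)
    recovered = unpool(pooled, indices)

    # only the maximum of each 2x2 window survives; every other entry is zero
    assert recovered.shape == x.shape
    assert torch.equal(recovered[recovered != 0], pooled.flatten())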
2025-03-17T18:45:15.9017041Z 2025-03-17T18:45:15.9017289Z :class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d` 2025-03-17T18:45:15.9017901Z including the indices of the maximal values and computes a partial inverse 2025-03-17T18:45:15.9018425Z in which all non-maximal values are set to zero. 2025-03-17T18:45:15.9018697Z 2025-03-17T18:45:15.9018808Z Note: 2025-03-17T18:45:15.9019272Z This operation may behave nondeterministically when the input indices has repeat values. 2025-03-17T18:45:15.9020121Z See https://github.com/pytorch/pytorch/issues/80827 and :doc:`/notes/randomness` for more information. 2025-03-17T18:45:15.9020828Z 2025-03-17T18:45:15.9021099Z .. note:: :class:`MaxPool2d` can map several input sizes to the same output 2025-03-17T18:45:15.9021649Z sizes. Hence, the inversion process can get ambiguous. 2025-03-17T18:45:15.9022144Z To accommodate this, you can provide the needed output size 2025-03-17T18:45:15.9022695Z as an additional argument :attr:`output_size` in the forward call. 2025-03-17T18:45:15.9023185Z See the Inputs and Example below. 2025-03-17T18:45:15.9023437Z 2025-03-17T18:45:15.9023529Z Args: 2025-03-17T18:45:15.9023860Z kernel_size (int or tuple): Size of the max pooling window. 2025-03-17T18:45:15.9024420Z stride (int or tuple): Stride of the max pooling window. 2025-03-17T18:45:15.9024868Z It is set to :attr:`kernel_size` by default. 2025-03-17T18:45:15.9025328Z padding (int or tuple): Padding that was added to the input 2025-03-17T18:45:15.9025650Z 2025-03-17T18:45:15.9025745Z Inputs: 2025-03-17T18:45:15.9026017Z - `input`: the input Tensor to invert 2025-03-17T18:45:15.9026559Z - `indices`: the indices given out by :class:`~torch.nn.MaxPool2d` 2025-03-17T18:45:15.9027071Z - `output_size` (optional): the targeted output size 2025-03-17T18:45:15.9027365Z 2025-03-17T18:45:15.9027464Z Shape: 2025-03-17T18:45:15.9027799Z - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`. 2025-03-17T18:45:15.9028342Z - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where 2025-03-17T18:45:15.9028698Z 2025-03-17T18:45:15.9028802Z .. math:: 2025-03-17T18:45:15.9029232Z H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]} 2025-03-17T18:45:15.9029624Z 2025-03-17T18:45:15.9029732Z .. 
math:: 2025-03-17T18:45:15.9030151Z W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]} 2025-03-17T18:45:15.9030534Z 2025-03-17T18:45:15.9030719Z or as given by :attr:`output_size` in the call operator 2025-03-17T18:45:15.9031007Z 2025-03-17T18:45:15.9031113Z Example:: 2025-03-17T18:45:15.9031249Z 2025-03-17T18:45:15.9031428Z >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True) 2025-03-17T18:45:15.9031957Z >>> unpool = nn.MaxUnpool2d(2, stride=2) 2025-03-17T18:45:15.9032350Z >>> input = torch.tensor([[[[ 1., 2., 3., 4.], 2025-03-17T18:45:15.9032724Z [ 5., 6., 7., 8.], 2025-03-17T18:45:15.9033072Z [ 9., 10., 11., 12.], 2025-03-17T18:45:15.9033424Z [13., 14., 15., 16.]]]]) 2025-03-17T18:45:15.9033783Z >>> output, indices = pool(input) 2025-03-17T18:45:15.9034130Z >>> unpool(output, indices) 2025-03-17T18:45:15.9034455Z tensor([[[[ 0., 0., 0., 0.], 2025-03-17T18:45:15.9034773Z [ 0., 6., 0., 8.], 2025-03-17T18:45:15.9035093Z [ 0., 0., 0., 0.], 2025-03-17T18:45:15.9035416Z [ 0., 14., 0., 16.]]]]) 2025-03-17T18:45:15.9035860Z >>> # Now using output_size to resolve an ambiguous size for the inverse 2025-03-17T18:45:15.9036359Z >>> input = torch.tensor([[[[ 1., 2., 3., 4., 5.], 2025-03-17T18:45:15.9036937Z [ 6., 7., 8., 9., 10.], 2025-03-17T18:45:15.9037324Z [11., 12., 13., 14., 15.], 2025-03-17T18:45:15.9037681Z [16., 17., 18., 19., 20.]]]]) 2025-03-17T18:45:15.9038049Z >>> output, indices = pool(input) 2025-03-17T18:45:15.9038464Z >>> # This call will not work without specifying output_size 2025-03-17T18:45:15.9038931Z >>> unpool(output, indices, output_size=input.size()) 2025-03-17T18:45:15.9039327Z tensor([[[[ 0., 0., 0., 0., 0.], 2025-03-17T18:45:15.9039721Z [ 0., 7., 0., 9., 0.], 2025-03-17T18:45:15.9040046Z [ 0., 0., 0., 0., 0.], 2025-03-17T18:45:15.9040375Z [ 0., 17., 0., 19., 0.]]]]) 2025-03-17T18:45:15.9040596Z 2025-03-17T18:45:15.9040601Z 2025-03-17T18:45:15.9040706Z 2025-03-17T18:45:15.9041103Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.9041486Z 2025-03-17T18:45:15.9288127Z msg = Cannot scrape callname=EmbeddingBag in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py line=270. 2025-03-17T18:45:15.9289081Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.9289925Z Compute sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings. 2025-03-17T18:45:15.9290379Z 2025-03-17T18:45:15.9290728Z For bags of constant length, no :attr:`per_sample_weights`, no indices equal to :attr:`padding_idx`, 2025-03-17T18:45:15.9291313Z and with 2D inputs, this class 2025-03-17T18:45:15.9291525Z 2025-03-17T18:45:15.9291859Z * with ``mode="sum"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=1)``, 2025-03-17T18:45:15.9292624Z * with ``mode="mean"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=1)``, 2025-03-17T18:45:15.9293393Z * with ``mode="max"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=1)``. 2025-03-17T18:45:15.9293821Z 2025-03-17T18:45:15.9294196Z However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these 2025-03-17T18:45:15.9294782Z operations. 2025-03-17T18:45:15.9294927Z 2025-03-17T18:45:15.9295205Z EmbeddingBag also supports per-sample weights as an argument to the forward 2025-03-17T18:45:15.9295844Z pass. 
This scales the output of the Embedding before performing a weighted 2025-03-17T18:45:15.9296502Z reduction as specified by ``mode``. If :attr:`per_sample_weights` is passed, the 2025-03-17T18:45:15.9297140Z only supported ``mode`` is ``"sum"``, which computes a weighted sum according to 2025-03-17T18:45:15.9297623Z :attr:`per_sample_weights`. 2025-03-17T18:45:15.9297822Z 2025-03-17T18:45:15.9297925Z Args: 2025-03-17T18:45:15.9298342Z num_embeddings (int): size of the dictionary of embeddings 2025-03-17T18:45:15.9298833Z embedding_dim (int): the size of each embedding vector 2025-03-17T18:45:15.9299456Z max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm` 2025-03-17T18:45:15.9300070Z is renormalized to have norm :attr:`max_norm`. 2025-03-17T18:45:15.9300709Z norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``. 2025-03-17T18:45:15.9301518Z scale_grad_by_freq (bool, optional): if given, this will scale gradients by the inverse of frequency of 2025-03-17T18:45:15.9302156Z the words in the mini-batch. Default ``False``. 2025-03-17T18:45:15.9302643Z Note: this option is not supported when ``mode="max"``. 2025-03-17T18:45:15.9303220Z mode (str, optional): ``"sum"``, ``"mean"`` or ``"max"``. Specifies the way to reduce the bag. 2025-03-17T18:45:15.9303834Z ``"sum"`` computes the weighted sum, taking :attr:`per_sample_weights` 2025-03-17T18:45:15.9304418Z into consideration. ``"mean"`` computes the average of the values 2025-03-17T18:45:15.9304958Z in the bag, ``"max"`` computes the max value over each bag. 2025-03-17T18:45:15.9305390Z Default: ``"mean"`` 2025-03-17T18:45:15.9305965Z sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See 2025-03-17T18:45:15.9306838Z Notes for more details regarding sparse gradients. Note: this option is not 2025-03-17T18:45:15.9307357Z supported when ``mode="max"``. 2025-03-17T18:45:15.9308003Z include_last_offset (bool, optional): if ``True``, :attr:`offsets` has one additional element, where the last element 2025-03-17T18:45:15.9308753Z is equivalent to the size of `indices`. This matches the CSR format. 2025-03-17T18:45:15.9309455Z padding_idx (int, optional): If specified, the entries at :attr:`padding_idx` do not contribute to the 2025-03-17T18:45:15.9310205Z gradient; therefore, the embedding vector at :attr:`padding_idx` is not updated 2025-03-17T18:45:15.9310911Z during training, i.e. it remains as a fixed "pad". For a newly constructed 2025-03-17T18:45:15.9311566Z EmbeddingBag, the embedding vector at :attr:`padding_idx` will default to all 2025-03-17T18:45:15.9312228Z zeros, but can be updated to another value to be used as the padding vector. 2025-03-17T18:45:15.9312866Z Note that the embedding vector at :attr:`padding_idx` is excluded from the 2025-03-17T18:45:15.9313370Z reduction. 2025-03-17T18:45:15.9313613Z 2025-03-17T18:45:15.9313712Z Attributes: 2025-03-17T18:45:15.9314199Z weight (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)` 2025-03-17T18:45:15.9314805Z initialized from :math:`\mathcal{N}(0, 1)`. 
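The ``mode="sum"`` equivalence stated earlier (an ``Embedding`` followed by ``torch.sum(dim=1)`` for 2D inputs with no ``per_sample_weights`` or ``padding_idx``) can be checked directly; a sketch sharing one weight matrix between the two modules:

    import torch
    from torch import nn

    bag = nn.EmbeddingBag(10, 3, mode="sum")
    emb = nn.Embedding.from_pretrained(bag.weight.detach().clone())

    idx = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])  # 2D input: one bag per row
    assert torch.allclose(bag(idx), emb(idx).sum(dim=1))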
2025-03-17T18:45:15.9315089Z 2025-03-17T18:45:15.9315201Z Examples:: 2025-03-17T18:45:15.9315357Z 2025-03-17T18:45:15.9315539Z >>> # an EmbeddingBag module containing 10 tensors of size 3 2025-03-17T18:45:15.9316011Z >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum') 2025-03-17T18:45:15.9316441Z >>> # a batch of 2 samples of 4 indices each 2025-03-17T18:45:15.9316892Z >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long) 2025-03-17T18:45:15.9317374Z >>> offsets = torch.tensor([0, 4], dtype=torch.long) 2025-03-17T18:45:15.9317804Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:15.9318265Z >>> embedding_sum(input, offsets) 2025-03-17T18:45:15.9318622Z tensor([[-0.8861, -5.4350, -0.0523], 2025-03-17T18:45:15.9318964Z [ 1.1306, -2.5798, -1.0044]]) 2025-03-17T18:45:15.9319185Z 2025-03-17T18:45:15.9319319Z >>> # Example with padding_idx 2025-03-17T18:45:15.9319762Z >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2) 2025-03-17T18:45:15.9320297Z >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long) 2025-03-17T18:45:15.9320778Z >>> offsets = torch.tensor([0, 4], dtype=torch.long) 2025-03-17T18:45:15.9321173Z >>> embedding_sum(input, offsets) 2025-03-17T18:45:15.9321517Z tensor([[ 0.0000, 0.0000, 0.0000], 2025-03-17T18:45:15.9321848Z [-0.7082, 3.2145, -2.6251]]) 2025-03-17T18:45:15.9322063Z 2025-03-17T18:45:15.9322258Z >>> # An EmbeddingBag can be loaded from an Embedding like so 2025-03-17T18:45:15.9322717Z >>> embedding = nn.Embedding(10, 3, padding_idx=2) 2025-03-17T18:45:15.9323242Z >>> embedding_sum = nn.EmbeddingBag.from_pretrained( 2025-03-17T18:45:15.9323635Z embedding.weight, 2025-03-17T18:45:15.9323968Z padding_idx=embedding.padding_idx, 2025-03-17T18:45:15.9324324Z mode='sum') 2025-03-17T18:45:15.9324602Z 2025-03-17T18:45:15.9324988Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.9325382Z 2025-03-17T18:45:15.9653963Z msg = Cannot scrape callname=DistributedDataParallel.join in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py line=1742. 2025-03-17T18:45:15.9655368Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.9655766Z 2025-03-17T18:45:15.9656079Z Context manager for training with uneven inputs across processes in DDP. 2025-03-17T18:45:15.9656446Z 2025-03-17T18:45:15.9656688Z This context manager will keep track of already-joined DDP processes, 2025-03-17T18:45:15.9657335Z and "shadow" the forward and backward passes by inserting collective 2025-03-17T18:45:15.9657977Z communication operations to match with the ones created by non-joined 2025-03-17T18:45:15.9658637Z DDP processes. This will ensure each collective call has a corresponding 2025-03-17T18:45:15.9659354Z call by already-joined DDP processes, preventing hangs or errors that 2025-03-17T18:45:15.9659962Z would otherwise happen when training with uneven inputs across 2025-03-17T18:45:15.9660555Z processes. Alternatively, if the flag ``throw_on_early_termination`` is 2025-03-17T18:45:15.9661185Z specified to be ``True``, all trainers will throw an error once one rank 2025-03-17T18:45:15.9661793Z runs out of inputs, allowing these errors to be caught and handled 2025-03-17T18:45:15.9662295Z according to application logic. 
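Not from the original docstring: a single-process sketch that only makes the API shape concrete, assuming the ``gloo`` backend is available and TCP port 29500 is free; uneven-input handling only becomes meaningful with multiple ranks (e.g. launched via ``torchrun``):

    import torch
    import torch.distributed as dist
    from torch import nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    # single-rank process group so the example is self-contained
    dist.init_process_group(
        backend="gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1
    )

    model = DDP(nn.Linear(1, 1, bias=False))
    inputs = [torch.tensor([1.0]) for _ in range(10)]

    with model.join():  # with one rank there is nothing to shadow; shown for the API shape only
        for inp in inputs:
            model(inp).sum().backward()

    dist.destroy_process_group()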
2025-03-17T18:45:15.9662512Z 2025-03-17T18:45:15.9662742Z Once all DDP processes have joined, the context manager will broadcast 2025-03-17T18:45:15.9663393Z the model corresponding to the last joined process to all processes to 2025-03-17T18:45:15.9663967Z ensure the model is the same across all processes 2025-03-17T18:45:15.9664359Z (which is guaranteed by DDP). 2025-03-17T18:45:15.9664623Z 2025-03-17T18:45:15.9664834Z To use this to enable training with uneven inputs across processes, 2025-03-17T18:45:15.9665451Z simply wrap this context manager around your training loop. No further 2025-03-17T18:45:15.9666037Z modifications to the model or data loading is required. 2025-03-17T18:45:15.9666348Z 2025-03-17T18:45:15.9666522Z .. warning:: 2025-03-17T18:45:15.9666942Z If the model or training loop this context manager is wrapped around 2025-03-17T18:45:15.9667515Z has additional distributed collective operations, such as 2025-03-17T18:45:15.9668040Z ``SyncBatchNorm`` in the model's forward pass, then the flag 2025-03-17T18:45:15.9668629Z ``throw_on_early_termination`` must be enabled. This is because this 2025-03-17T18:45:15.9669358Z context manager is not aware of non-DDP collective communication. 2025-03-17T18:45:15.9669938Z This flag will cause all ranks to throw when any one rank 2025-03-17T18:45:15.9670462Z exhausts inputs, allowing these errors to be caught and recovered 2025-03-17T18:45:15.9670969Z from across all ranks. 2025-03-17T18:45:15.9671153Z 2025-03-17T18:45:15.9671266Z Args: 2025-03-17T18:45:15.9671634Z divide_by_initial_world_size (bool): If ``True``, will divide 2025-03-17T18:45:15.9672218Z gradients by the initial ``world_size`` DDP training was launched 2025-03-17T18:45:15.9672731Z with. If ``False``, will compute the effective world size 2025-03-17T18:45:15.9673280Z (number of ranks that have not depleted their inputs yet) and 2025-03-17T18:45:15.9673789Z divide gradients by that during allreduce. Set 2025-03-17T18:45:15.9674246Z ``divide_by_initial_world_size=True`` to ensure every input 2025-03-17T18:45:15.9674826Z sample including the uneven inputs have equal weight in terms of 2025-03-17T18:45:15.9675404Z how much they contribute to the global gradient. This is 2025-03-17T18:45:15.9675914Z achieved by always dividing the gradient by the initial 2025-03-17T18:45:15.9676427Z ``world_size`` even when we encounter uneven inputs. If you set 2025-03-17T18:45:15.9676979Z this to ``False``, we divide the gradient by the remaining 2025-03-17T18:45:15.9677540Z number of nodes. This ensures parity with training on a smaller 2025-03-17T18:45:15.9678059Z ``world_size`` although it also means the uneven inputs would 2025-03-17T18:45:15.9678632Z contribute more towards the global gradient. Typically, you 2025-03-17T18:45:15.9679248Z would want to set this to ``True`` for cases where the last few 2025-03-17T18:45:15.9679839Z inputs of your training job are uneven. In extreme cases, where 2025-03-17T18:45:15.9680372Z there is a large discrepancy in the number of inputs, setting 2025-03-17T18:45:15.9680968Z this to ``False`` might provide better results. 2025-03-17T18:45:15.9681582Z enable (bool): Whether to enable uneven input detection or not. Pass 2025-03-17T18:45:15.9682450Z in ``enable=False`` to disable in cases where you know that 2025-03-17T18:45:15.9683206Z inputs are even across participating processes. Default is 2025-03-17T18:45:15.9683671Z ``True``. 
2025-03-17T18:45:15.9684016Z throw_on_early_termination (bool): Whether to throw an error 2025-03-17T18:45:15.9684515Z or continue training when at least one rank has exhausted 2025-03-17T18:45:15.9685015Z inputs. If ``True``, will throw upon the first rank reaching end 2025-03-17T18:45:15.9685514Z of data. If ``False``, will continue training with a smaller 2025-03-17T18:45:15.9686027Z effective world size until all ranks are joined. Note that if 2025-03-17T18:45:15.9686482Z this flag is specified, then the flag 2025-03-17T18:45:15.9686919Z ``divide_by_initial_world_size`` would be ignored. Default 2025-03-17T18:45:15.9687306Z is ``False``. 2025-03-17T18:45:15.9687479Z 2025-03-17T18:45:15.9687484Z 2025-03-17T18:45:15.9687589Z Example:: 2025-03-17T18:45:15.9687732Z 2025-03-17T18:45:15.9687853Z >>> # xdoctest: +SKIP("Distributed") 2025-03-17T18:45:15.9688192Z >>> import torch 2025-03-17T18:45:15.9688485Z >>> import torch.distributed as dist 2025-03-17T18:45:15.9688822Z >>> import os 2025-03-17T18:45:15.9689117Z >>> import torch.multiprocessing as mp 2025-03-17T18:45:15.9689476Z >>> import torch.nn as nn 2025-03-17T18:45:15.9689790Z >>> # On each spawned worker 2025-03-17T18:45:15.9690104Z >>> def worker(rank): 2025-03-17T18:45:15.9690468Z >>> dist.init_process_group("nccl", rank=rank, world_size=2) 2025-03-17T18:45:15.9690894Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:15.9691265Z >>> model = nn.Linear(1, 1, bias=False).to(rank) 2025-03-17T18:45:15.9691704Z >>> model = torch.nn.parallel.DistributedDataParallel( 2025-03-17T18:45:15.9692218Z >>> model, device_ids=[rank], output_device=rank 2025-03-17T18:45:15.9692579Z >>> ) 2025-03-17T18:45:15.9692858Z >>> # Rank 1 gets one more input than rank 0. 2025-03-17T18:45:15.9693306Z >>> inputs = [torch.tensor([1]).float() for _ in range(10 + rank)] 2025-03-17T18:45:15.9693735Z >>> with model.join(): 2025-03-17T18:45:15.9694023Z >>> for _ in range(5): 2025-03-17T18:45:15.9694339Z >>> for inp in inputs: 2025-03-17T18:45:15.9694675Z >>> loss = model(inp).sum() 2025-03-17T18:45:15.9695028Z >>> loss.backward() 2025-03-17T18:45:15.9695443Z >>> # Without the join() API, the below synchronization will hang 2025-03-17T18:45:15.9695912Z >>> # blocking for rank 1's allreduce to complete. 2025-03-17T18:45:15.9696312Z >>> torch.cuda.synchronize(device=rank) 2025-03-17T18:45:15.9696567Z 2025-03-17T18:45:15.9696830Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.9697224Z 2025-03-17T18:45:15.9697981Z msg = Cannot scrape callname=DistributedDataParallel._register_fused_optim in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py line=2033. 2025-03-17T18:45:15.9699102Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:15.9699499Z 2025-03-17T18:45:15.9699816Z Register an optimizer in DDP to optimize parameter immediately after its gradient reduction. 2025-03-17T18:45:15.9700265Z 2025-03-17T18:45:15.9700480Z Registers an optimizer with DDP such that the optimization for a 2025-03-17T18:45:15.9701064Z parameter will run immediately when that parameter's gradient is 2025-03-17T18:45:15.9701610Z finished with reduction, instead of waiting for all parameters' 2025-03-17T18:45:15.9702166Z gradients to finish reduction. This can result in a training speedup 2025-03-17T18:45:15.9702747Z depending on your workload since the optimizer can run while gradient 2025-03-17T18:45:15.9703335Z reduction for other parameters are still ongoing. 
In addition, this has 2025-03-17T18:45:15.9703931Z the potential to reduce peak memory consumption during training, as it 2025-03-17T18:45:15.9704496Z only needs to load the per-parameter optimizer states of a single 2025-03-17T18:45:15.9705076Z parameter at a time, instead of loading all per-parameter optimizer 2025-03-17T18:45:15.9705513Z states at once. 2025-03-17T18:45:15.9705671Z 2025-03-17T18:45:15.9705761Z Args: 2025-03-17T18:45:15.9706095Z optim (Type): a ``torch.optim.Optimizer`` class to be registered 2025-03-17T18:45:15.9706626Z as a fused optimizer. 2025-03-17T18:45:15.9706998Z *args (Sequence[Any]): Arguments to forward to `optim`. 2025-03-17T18:45:15.9707519Z optim_params (Optional[Iterable[torch.Tensor]]): Set of parameters 2025-03-17T18:45:15.9708098Z to optimize, similar to `params` argument of traditional `torch.optim` 2025-03-17T18:45:15.9708670Z Optimizers. If this is omitted, all DDP model parameters will be 2025-03-17T18:45:15.9709099Z optimized. 2025-03-17T18:45:15.9709451Z **kwargs: (Dict[str, Any]): Keyword arguments to forward to `optim`. 2025-03-17T18:45:15.9709769Z 2025-03-17T18:45:15.9709877Z .. warning :: 2025-03-17T18:45:15.9710241Z _register_fused_optim should only be called once on a DDP instance, 2025-03-17T18:45:15.9710802Z and registering multiple fused optimizers for the same DDP model 2025-03-17T18:45:15.9711262Z is not currently supported. Please ping 2025-03-17T18:45:15.9711756Z https://github.com/pytorch/pytorch/issues/71595 if this is necessary 2025-03-17T18:45:15.9712222Z for your use case. 2025-03-17T18:45:15.9712398Z 2025-03-17T18:45:15.9712495Z .. warning :: 2025-03-17T18:45:15.9712838Z _register_fused_optim and register_comm_hook currently do not 2025-03-17T18:45:15.9713384Z compose together, meaning that custom DDP communication hooks are 2025-03-17T18:45:15.9713994Z not supported with overlapped optimizers. Please ping 2025-03-17T18:45:15.9714530Z https://github.com/pytorch/pytorch/issues/71595 if this is necessary 2025-03-17T18:45:15.9714991Z for your use case. 2025-03-17T18:45:15.9715168Z 2025-03-17T18:45:15.9715262Z .. warning :: 2025-03-17T18:45:15.9715638Z Gradient accumulation and DDP `no_sync` are currently not supported 2025-03-17T18:45:15.9716127Z with overlapped optimizer. Please ping 2025-03-17T18:45:15.9716603Z https://github.com/pytorch/pytorch/issues/71595 if this is necessary 2025-03-17T18:45:15.9717061Z for your use case. 2025-03-17T18:45:15.9717224Z 2025-03-17T18:45:15.9717335Z Example:: 2025-03-17T18:45:15.9717463Z 2025-03-17T18:45:15.9717611Z >>> # xdoctest: +SKIP("No rendezvous handler") 2025-03-17T18:45:15.9718171Z >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...') 2025-03-17T18:45:15.9718813Z >>> net = torch.nn.parallel.DistributedDataParallel(model, pg) 2025-03-17T18:45:15.9719231Z >>> lr = 1e-2 2025-03-17T18:45:15.9719493Z >>> betas = (0.9, 0.99) 2025-03-17T18:45:15.9719778Z >>> eps = 1e-6 2025-03-17T18:45:15.9720165Z >>> net._register_fused_optim(torch.optim.Adam, lr, betas=betas, eps=eps) 2025-03-17T18:45:15.9720636Z >>> # Example with subset of parameters 2025-03-17T18:45:15.9721018Z >>> params_to_opt = [list(net.parameters())[0]] 2025-03-17T18:45:15.9721389Z >>> net._register_fused_optim( 2025-03-17T18:45:15.9721837Z ... torch.optim.Adam, lr, optim_params=params_to_opt, betas=betas, eps=eps 2025-03-17T18:45:15.9722301Z ... 
) 2025-03-17T18:45:15.9722437Z 2025-03-17T18:45:15.9722731Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:15.9723119Z 2025-03-17T18:45:16.0179472Z msg = Cannot scrape callname=convert_conv2d_weight_memory_format in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/memory_format.py line=6. 2025-03-17T18:45:16.0180550Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0181170Z Convert ``memory_format`` of ``nn.Conv2d.weight`` to ``memory_format``. 2025-03-17T18:45:16.0181526Z 2025-03-17T18:45:16.0181813Z The conversion recursively applies to nested ``nn.Module``, including ``module``. 2025-03-17T18:45:16.0182637Z Note that it only changes the memory_format, but not the semantics of each dimensions. 2025-03-17T18:45:16.0183326Z This function is used to facilitate the computation to adopt NHWC kernels, which 2025-03-17T18:45:16.0184044Z provides considerable speed up for fp16 data on CUDA devices with compute capability >= 7.0 2025-03-17T18:45:16.0184545Z 2025-03-17T18:45:16.0184659Z .. note:: 2025-03-17T18:45:16.0185083Z Calling ``model.to(memory_format=torch.channels_last)`` is more aggressive 2025-03-17T18:45:16.0185896Z than the utility function ``convert_conv2d_weight_memory_format``. Any 2025-03-17T18:45:16.0186574Z layer with 4d weight will be affected by ``model.to``, which does not 2025-03-17T18:45:16.0187164Z necessarily benefit from conversion to specified ``memory_format``. 2025-03-17T18:45:16.0187759Z One place we are confident in is that NHWC(channels_last) conversion for 2025-03-17T18:45:16.0188352Z convolution in cuDNN, as it is beneficial to run convolution in NHWC, 2025-03-17T18:45:16.0188926Z even in cases where we have to apply permutation to input tensors. 2025-03-17T18:45:16.0189270Z 2025-03-17T18:45:16.0189505Z Hence our strategy here is to convert only the weight of convolution to 2025-03-17T18:45:16.0189985Z channels_last. This ensures that; 2025-03-17T18:45:16.0190451Z 1. Fast convolution kernels will be used, the benefit of which could 2025-03-17T18:45:16.0191046Z outweigh overhead of permutation (if input is not in the same format). 2025-03-17T18:45:16.0191664Z 2. No unnecessary permutations are applied on layers that do not benefit 2025-03-17T18:45:16.0192301Z from memory_format conversion. 2025-03-17T18:45:16.0192537Z 2025-03-17T18:45:16.0192771Z The optimal case is that, layers between convolution layers are channels 2025-03-17T18:45:16.0193377Z last compatible. Input tensor would be permuted to channels last when it 2025-03-17T18:45:16.0193992Z encounters the first convolution layer and stay in that memory format. 2025-03-17T18:45:16.0194601Z Hence following convolutions will not need to permute its input tensor. 2025-03-17T18:45:16.0194980Z 2025-03-17T18:45:16.0195211Z In case where a channels last incompatible layer is between convolution 2025-03-17T18:45:16.0195790Z layers, we need to permute the input tensor back to contiguous format 2025-03-17T18:45:16.0196367Z for that layer. The input tensor will go through the remaining layers in 2025-03-17T18:45:16.0196959Z contiguous format and be permuted to channels last when it encounters 2025-03-17T18:45:16.0197540Z another convolution layer. There's no point in propagating that 2025-03-17T18:45:16.0198108Z permutation to an earlier layer, as most layers are quite agnostic to 2025-03-17T18:45:16.0198568Z ``memory_format``. 
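The docstring's example further down requires a CUDA device, but the weight-layout change itself can be observed on CPU; a sketch assuming ``convert_conv2d_weight_memory_format`` behaves the same way off-GPU:

    import torch
    from torch import nn

    model = nn.Sequential(nn.Conv2d(8, 4, 3))
    model = nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last)

    w = model[0].weight
    # the 4-D weight is now laid out channels-last; its logical shape is unchanged
    assert w.is_contiguous(memory_format=torch.channels_last)
    assert w.shape == torch.Size([4, 8, 3, 3])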
2025-03-17T18:45:16.0198748Z 2025-03-17T18:45:16.0199001Z This claim might change when PyTorch supports fusion of permutation, as 2025-03-17T18:45:16.0199602Z there might have been a better spot to fuse the permutation other than 2025-03-17T18:45:16.0200078Z immediately before a convolution. 2025-03-17T18:45:16.0200309Z 2025-03-17T18:45:16.0200414Z Args: 2025-03-17T18:45:16.0200771Z module (nn.Module): ``nn.Conv2d`` & ``nn.ConvTranspose2d`` or container 2025-03-17T18:45:16.0201268Z ``nn.Module`` 2025-03-17T18:45:16.0201648Z memory_format: user specified ``memory_format``, 2025-03-17T18:45:16.0202109Z e.g. ``torch.channels_last`` or ``torch.contiguous_format`` 2025-03-17T18:45:16.0202412Z 2025-03-17T18:45:16.0202519Z Returns: 2025-03-17T18:45:16.0202814Z The original module with updated ``nn.Conv2d`` 2025-03-17T18:45:16.0203075Z 2025-03-17T18:45:16.0203178Z Example: 2025-03-17T18:45:16.0203449Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:16.0203874Z >>> # xdoctest: +REQUIRES(env:CUBLAS_WORKSPACE_CONFIG) 2025-03-17T18:45:16.0204431Z >>> input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float16, device="cuda") 2025-03-17T18:45:16.0204908Z >>> model = nn.Sequential( 2025-03-17T18:45:16.0205241Z >>> nn.Conv2d(8, 4, 3)).cuda().half() 2025-03-17T18:45:16.0205596Z >>> # This is identical to: 2025-03-17T18:45:16.0206062Z >>> # nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last) 2025-03-17T18:45:16.0206714Z >>> model = nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last) 2025-03-17T18:45:16.0207220Z >>> out = model(input) 2025-03-17T18:45:16.0207505Z 2025-03-17T18:45:16.0207898Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0208290Z 2025-03-17T18:45:16.0208937Z msg = Cannot scrape callname=convert_conv3d_weight_memory_format in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/memory_format.py line=81. 2025-03-17T18:45:16.0209973Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0210585Z Convert ``memory_format`` of ``nn.Conv3d.weight`` to ``memory_format`` 2025-03-17T18:45:16.0211206Z The conversion recursively applies to nested ``nn.Module``, including ``module``. 2025-03-17T18:45:16.0211897Z Note that it only changes the memory_format, but not the semantics of each dimensions. 2025-03-17T18:45:16.0212573Z This function is used to facilitate the computation to adopt NHWC kernels, which 2025-03-17T18:45:16.0213280Z provides considerable speed up for fp16 data on CUDA devices with compute capability >= 7.0 2025-03-17T18:45:16.0213714Z 2025-03-17T18:45:16.0213883Z .. note:: 2025-03-17T18:45:16.0214280Z Calling ``model.to(memory_format=torch.channels_last_3d)`` is more aggressive 2025-03-17T18:45:16.0214893Z than the utility function ``convert_conv3d_weight_memory_format``. Any 2025-03-17T18:45:16.0215466Z layer with 4d weight will be affected by ``model.to``, which does not 2025-03-17T18:45:16.0216046Z necessarily benefit from conversion to specified ``memory_format``. 2025-03-17T18:45:16.0216646Z One place we are confident in is that NDHWC(channels_last_3d) conversion for 2025-03-17T18:45:16.0217244Z convolution in cuDNN, as it is beneficial to run convolution in NDHWC, 2025-03-17T18:45:16.0217809Z even in cases where we have to apply permutation to input tensors. 
2025-03-17T18:45:16.0218141Z 2025-03-17T18:45:16.0218385Z Hence our strategy here is to convert only the weight of convolution to 2025-03-17T18:45:16.0218869Z channels_last_3d. This ensures that; 2025-03-17T18:45:16.0219339Z 1. Fast convolution kernels will be used, the benefit of which could 2025-03-17T18:45:16.0219928Z outweigh overhead of permutation (if input is not in the same format). 2025-03-17T18:45:16.0220535Z 2. No unnecessary permutations are applied on layers that do not benefit 2025-03-17T18:45:16.0221021Z from memory_format conversion. 2025-03-17T18:45:16.0221242Z 2025-03-17T18:45:16.0221488Z The optimal case is that, layers between convolution layers are channels 2025-03-17T18:45:16.0222078Z last compatible. Input tensor would be permuted to channels last when it 2025-03-17T18:45:16.0222722Z encounters the first convolution layer and stay in that memory format. 2025-03-17T18:45:16.0223336Z Hence following convolutions will not need to permute its input tensor. 2025-03-17T18:45:16.0223714Z 2025-03-17T18:45:16.0223945Z In case where a channels last incompatible layer is between convolution 2025-03-17T18:45:16.0224527Z layers, we need to permute the input tensor back to contiguous format 2025-03-17T18:45:16.0225108Z for that layer. The input tensor will go through the remaining layers in 2025-03-17T18:45:16.0225701Z contiguous format and be permuted to channels last when it encounters 2025-03-17T18:45:16.0226432Z another convolution layer. There's no point in propagating that 2025-03-17T18:45:16.0227059Z permutation to an earlier layer, as most layers are quite agnostic to 2025-03-17T18:45:16.0227516Z ``memory_format``. 2025-03-17T18:45:16.0227711Z 2025-03-17T18:45:16.0227953Z This claim might change when PyTorch supports fusion of permutation, as 2025-03-17T18:45:16.0228555Z there might have been a better spot to fuse the permutation other than 2025-03-17T18:45:16.0229049Z immediately before a convolution. 2025-03-17T18:45:16.0229299Z 2025-03-17T18:45:16.0229392Z Args: 2025-03-17T18:45:16.0229754Z module (nn.Module): ``nn.Conv3d`` & ``nn.ConvTranspose3d`` or container 2025-03-17T18:45:16.0230210Z ``nn.Module`` 2025-03-17T18:45:16.0230595Z memory_format: user specified ``memory_format``, 2025-03-17T18:45:16.0231059Z e.g. 
``torch.channels_last`` or ``torch.contiguous_format`` 2025-03-17T18:45:16.0231365Z 2025-03-17T18:45:16.0231476Z Returns: 2025-03-17T18:45:16.0231774Z The original module with updated ``nn.Conv3d`` 2025-03-17T18:45:16.0232037Z 2025-03-17T18:45:16.0232146Z Example: 2025-03-17T18:45:16.0232436Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:16.0232866Z >>> # xdoctest: +REQUIRES(env:CUBLAS_WORKSPACE_CONFIG) 2025-03-17T18:45:16.0233391Z >>> input = torch.randint(1, 10, (2, 8, 4, 4, 4), dtype=torch.float16, device="cuda") 2025-03-17T18:45:16.0233869Z >>> model = nn.Sequential( 2025-03-17T18:45:16.0234207Z >>> nn.Conv3d(8, 4, 3)).cuda().half() 2025-03-17T18:45:16.0234559Z >>> # This is identical to: 2025-03-17T18:45:16.0235098Z >>> # nn.utils.convert_conv3d_weight_memory_format(model, torch.channels_last_3d) 2025-03-17T18:45:16.0235758Z >>> model = nn.utils.convert_conv3d_weight_memory_format(model, torch.channels_last_3d) 2025-03-17T18:45:16.0236280Z >>> out = model(input) 2025-03-17T18:45:16.0236575Z 2025-03-17T18:45:16.0237188Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0237584Z 2025-03-17T18:45:16.0413800Z msg = Cannot scrape callname=random_structured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=935. 2025-03-17T18:45:16.0414748Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0415381Z Prune tensor by removing random channels along the specified dimension. 2025-03-17T18:45:16.0415748Z 2025-03-17T18:45:16.0415986Z Prunes tensor corresponding to parameter called ``name`` in ``module`` 2025-03-17T18:45:16.0416583Z by removing the specified ``amount`` of (currently unpruned) channels 2025-03-17T18:45:16.0417077Z along the specified ``dim`` selected at random. 2025-03-17T18:45:16.0417600Z Modifies module in place (and also return the modified module) 2025-03-17T18:45:16.0418122Z by: 2025-03-17T18:45:16.0418262Z 2025-03-17T18:45:16.0418482Z 1) adding a named buffer called ``name+'_mask'`` corresponding to the 2025-03-17T18:45:16.0419054Z binary mask applied to the parameter ``name`` by the pruning method. 2025-03-17T18:45:16.0419629Z 2) replacing the parameter ``name`` by its pruned version, while the 2025-03-17T18:45:16.0420306Z original (unpruned) parameter is stored in a new parameter named 2025-03-17T18:45:16.0420746Z ``name+'_orig'``. 2025-03-17T18:45:16.0420932Z 2025-03-17T18:45:16.0421024Z Args: 2025-03-17T18:45:16.0421350Z module (nn.Module): module containing the tensor to prune 2025-03-17T18:45:16.0421850Z name (str): parameter name within ``module`` on which pruning 2025-03-17T18:45:16.0422265Z will act. 2025-03-17T18:45:16.0422620Z amount (int or float): quantity of parameters to prune. 2025-03-17T18:45:16.0423101Z If ``float``, should be between 0.0 and 1.0 and represent the 2025-03-17T18:45:16.0423769Z fraction of parameters to prune. If ``int``, it represents the 2025-03-17T18:45:16.0424462Z absolute number of parameters to prune. 2025-03-17T18:45:16.0425296Z dim (int): index of the dim along which we define channels to prune. 2025-03-17T18:45:16.0425701Z 2025-03-17T18:45:16.0425809Z Returns: 2025-03-17T18:45:16.0426187Z module (nn.Module): modified (i.e. pruned) version of the input module 2025-03-17T18:45:16.0426596Z 2025-03-17T18:45:16.0426710Z Examples: 2025-03-17T18:45:16.0426967Z >>> # xdoctest: +SKIP 2025-03-17T18:45:16.0427295Z >>> m = prune.random_structured( 2025-03-17T18:45:16.0427679Z ... 
nn.Linear(5, 3), 'weight', amount=3, dim=1 2025-03-17T18:45:16.0428094Z ... ) 2025-03-17T18:45:16.0428475Z >>> columns_pruned = int(sum(torch.sum(m.weight, dim=0) == 0)) 2025-03-17T18:45:16.0429037Z >>> print(columns_pruned) 2025-03-17T18:45:16.0429326Z 3 2025-03-17T18:45:16.0429546Z 2025-03-17T18:45:16.0429932Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0430323Z 2025-03-17T18:45:16.0430838Z msg = Cannot scrape callname=ln_structured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=976. 2025-03-17T18:45:16.0431743Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0432454Z Prune tensor by removing channels with the lowest L\ ``n``-norm along the specified dimension. 2025-03-17T18:45:16.0432900Z 2025-03-17T18:45:16.0433144Z Prunes tensor corresponding to parameter called ``name`` in ``module`` 2025-03-17T18:45:16.0433856Z by removing the specified ``amount`` of (currently unpruned) channels 2025-03-17T18:45:16.0434394Z along the specified ``dim`` with the lowest L\ ``n``-norm. 2025-03-17T18:45:16.0434906Z Modifies module in place (and also return the modified module) 2025-03-17T18:45:16.0435323Z by: 2025-03-17T18:45:16.0435461Z 2025-03-17T18:45:16.0435679Z 1) adding a named buffer called ``name+'_mask'`` corresponding to the 2025-03-17T18:45:16.0436248Z binary mask applied to the parameter ``name`` by the pruning method. 2025-03-17T18:45:16.0436984Z 2) replacing the parameter ``name`` by its pruned version, while the 2025-03-17T18:45:16.0437553Z original (unpruned) parameter is stored in a new parameter named 2025-03-17T18:45:16.0437998Z ``name+'_orig'``. 2025-03-17T18:45:16.0438182Z 2025-03-17T18:45:16.0438273Z Args: 2025-03-17T18:45:16.0438598Z module (nn.Module): module containing the tensor to prune 2025-03-17T18:45:16.0439098Z name (str): parameter name within ``module`` on which pruning 2025-03-17T18:45:16.0439513Z will act. 2025-03-17T18:45:16.0439868Z amount (int or float): quantity of parameters to prune. 2025-03-17T18:45:16.0440349Z If ``float``, should be between 0.0 and 1.0 and represent the 2025-03-17T18:45:16.0440867Z fraction of parameters to prune. If ``int``, it represents the 2025-03-17T18:45:16.0441343Z absolute number of parameters to prune. 2025-03-17T18:45:16.0441808Z n (int, float, inf, -inf, 'fro', 'nuc'): See documentation of valid 2025-03-17T18:45:16.0442296Z entries for argument ``p`` in :func:`torch.norm`. 2025-03-17T18:45:16.0442840Z dim (int): index of the dim along which we define channels to prune. 2025-03-17T18:45:16.0443421Z importance_scores (torch.Tensor): tensor of importance scores (of same 2025-03-17T18:45:16.0479141Z shape as module parameter) used to compute mask for pruning. 2025-03-17T18:45:16.0479930Z The values in this tensor indicate the importance of the corresponding 2025-03-17T18:45:16.0480440Z elements in the parameter being pruned. 2025-03-17T18:45:16.0480941Z If unspecified or None, the module parameter will be used in its place. 2025-03-17T18:45:16.0481318Z 2025-03-17T18:45:16.0481415Z Returns: 2025-03-17T18:45:16.0481917Z module (nn.Module): modified (i.e. pruned) version of the input module 2025-03-17T18:45:16.0482280Z 2025-03-17T18:45:16.0482379Z Examples: 2025-03-17T18:45:16.0482658Z >>> from torch.nn.utils import prune 2025-03-17T18:45:16.0483019Z >>> m = prune.ln_structured( 2025-03-17T18:45:16.0483435Z ... 
nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf') 2025-03-17T18:45:16.0483844Z ... ) 2025-03-17T18:45:16.0484096Z 2025-03-17T18:45:16.0484483Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0484862Z 2025-03-17T18:45:16.0485470Z msg = Cannot scrape callname=global_unstructured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=1023. 2025-03-17T18:45:16.0486417Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0486818Z 2025-03-17T18:45:16.0487258Z Globally prunes tensors corresponding to all parameters in ``parameters`` by applying the specified ``pruning_method``. 2025-03-17T18:45:16.0487831Z 2025-03-17T18:45:16.0487947Z Modifies modules in place by: 2025-03-17T18:45:16.0488159Z 2025-03-17T18:45:16.0488377Z 1) adding a named buffer called ``name+'_mask'`` corresponding to the 2025-03-17T18:45:16.0488937Z binary mask applied to the parameter ``name`` by the pruning method. 2025-03-17T18:45:16.0489502Z 2) replacing the parameter ``name`` by its pruned version, while the 2025-03-17T18:45:16.0490054Z original (unpruned) parameter is stored in a new parameter named 2025-03-17T18:45:16.0490484Z ``name+'_orig'``. 2025-03-17T18:45:16.0490641Z 2025-03-17T18:45:16.0490739Z Args: 2025-03-17T18:45:16.0491165Z parameters (Iterable of (module, name) tuples): parameters of 2025-03-17T18:45:16.0491696Z the model to prune in a global fashion, i.e. by aggregating all 2025-03-17T18:45:16.0492230Z weights prior to deciding which ones to prune. module must be of 2025-03-17T18:45:16.0492729Z type :class:`nn.Module`, and name must be a string. 2025-03-17T18:45:16.0493239Z pruning_method (function): a valid pruning function from this module, 2025-03-17T18:45:16.0493776Z or a custom one implemented by the user that satisfies the 2025-03-17T18:45:16.0494319Z implementation guidelines and has ``PRUNING_TYPE='unstructured'``. 2025-03-17T18:45:16.0494924Z importance_scores (dict): a dictionary mapping (module, name) tuples to 2025-03-17T18:45:16.0495519Z the corresponding parameter's importance scores tensor. The tensor 2025-03-17T18:45:16.0496098Z should be the same shape as the parameter, and is used for computing 2025-03-17T18:45:16.0496534Z mask for pruning. 2025-03-17T18:45:16.0496937Z If unspecified or None, the parameter will be used in place of its 2025-03-17T18:45:16.0497382Z importance scores. 2025-03-17T18:45:16.0497701Z kwargs: other keyword arguments such as: 2025-03-17T18:45:16.0498151Z amount (int or float): quantity of parameters to prune across the 2025-03-17T18:45:16.0498596Z specified parameters. 2025-03-17T18:45:16.0498980Z If ``float``, should be between 0.0 and 1.0 and represent the 2025-03-17T18:45:16.0499491Z fraction of parameters to prune. If ``int``, it represents the 2025-03-17T18:45:16.0500013Z absolute number of parameters to prune. 2025-03-17T18:45:16.0500276Z 2025-03-17T18:45:16.0500370Z Raises: 2025-03-17T18:45:16.0500665Z TypeError: if ``PRUNING_TYPE != 'unstructured'`` 2025-03-17T18:45:16.0500949Z 2025-03-17T18:45:16.0501039Z Note: 2025-03-17T18:45:16.0501399Z Since global structured pruning doesn't make much sense unless the 2025-03-17T18:45:16.0501955Z norm is normalized by the size of the parameter, we now limit the 2025-03-17T18:45:16.0502440Z scope of global pruning to unstructured methods. 
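The note above is the reason ``pruning_method`` must advertise ``PRUNING_TYPE='unstructured'``. As a sketch of what a compliant custom method could look like (the class name and threshold are hypothetical; the documented extension points are ``prune.BasePruningMethod`` and its ``compute_mask`` hook), before the Examples below:

    >>> from torch.nn.utils import prune
    >>> class ThresholdPruning(prune.BasePruningMethod):  # hypothetical example class
    ...     PRUNING_TYPE = "unstructured"  # anything else is rejected by global_unstructured
    ...     def compute_mask(self, t, default_mask):
    ...         # keep entries whose magnitude exceeds an illustrative fixed threshold
    ...         return default_mask * (t.abs() > 0.1)
    >>> # the class itself would then be passed as ``pruning_method=ThresholdPruning``,
    >>> # just like ``prune.L1Unstructured`` in the Examples that follow.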
2025-03-17T18:45:16.0502708Z 2025-03-17T18:45:16.0502813Z Examples: 2025-03-17T18:45:16.0503070Z >>> from torch.nn.utils import prune 2025-03-17T18:45:16.0503463Z >>> from collections import OrderedDict 2025-03-17T18:45:16.0503825Z >>> net = nn.Sequential(OrderedDict([ 2025-03-17T18:45:16.0504175Z ... ('first', nn.Linear(10, 4)), 2025-03-17T18:45:16.0504510Z ... ('second', nn.Linear(4, 1)), 2025-03-17T18:45:16.0504814Z ... ])) 2025-03-17T18:45:16.0505064Z >>> parameters_to_prune = ( 2025-03-17T18:45:16.0505377Z ... (net.first, 'weight'), 2025-03-17T18:45:16.0505695Z ... (net.second, 'weight'), 2025-03-17T18:45:16.0505997Z ... ) 2025-03-17T18:45:16.0506242Z >>> prune.global_unstructured( 2025-03-17T18:45:16.0506666Z ... parameters_to_prune, 2025-03-17T18:45:16.0507012Z ... pruning_method=prune.L1Unstructured, 2025-03-17T18:45:16.0507365Z ... amount=10, 2025-03-17T18:45:16.0507619Z ... ) 2025-03-17T18:45:16.0507976Z >>> print(sum(torch.nn.utils.parameters_to_vector(net.buffers()) == 0)) 2025-03-17T18:45:16.0508420Z tensor(10) 2025-03-17T18:45:16.0508557Z 2025-03-17T18:45:16.0508564Z 2025-03-17T18:45:16.0508835Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0509212Z 2025-03-17T18:45:16.0509753Z msg = Cannot scrape callname=custom_from_mask in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=1142. 2025-03-17T18:45:16.0510660Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.0511451Z Prune tensor corresponding to parameter called ``name`` in ``module`` by applying the pre-computed mask in ``mask``. 2025-03-17T18:45:16.0511968Z 2025-03-17T18:45:16.0512203Z Modifies module in place (and also return the modified module) by: 2025-03-17T18:45:16.0512601Z 2025-03-17T18:45:16.0512830Z 1) adding a named buffer called ``name+'_mask'`` corresponding to the 2025-03-17T18:45:16.0513404Z binary mask applied to the parameter ``name`` by the pruning method. 2025-03-17T18:45:16.0513977Z 2) replacing the parameter ``name`` by its pruned version, while the 2025-03-17T18:45:16.0514537Z original (unpruned) parameter is stored in a new parameter named 2025-03-17T18:45:16.0514980Z ``name+'_orig'``. 2025-03-17T18:45:16.0515149Z 2025-03-17T18:45:16.0515251Z Args: 2025-03-17T18:45:16.0515567Z module (nn.Module): module containing the tensor to prune 2025-03-17T18:45:16.0516067Z name (str): parameter name within ``module`` on which pruning 2025-03-17T18:45:16.0516465Z will act. 2025-03-17T18:45:16.0516813Z mask (Tensor): binary mask to be applied to the parameter. 2025-03-17T18:45:16.0517126Z 2025-03-17T18:45:16.0517219Z Returns: 2025-03-17T18:45:16.0517591Z module (nn.Module): modified (i.e. pruned) version of the input module 2025-03-17T18:45:16.0517947Z 2025-03-17T18:45:16.0518040Z Examples: 2025-03-17T18:45:16.0518310Z >>> from torch.nn.utils import prune 2025-03-17T18:45:16.0518674Z >>> m = prune.custom_from_mask( 2025-03-17T18:45:16.0519079Z ... nn.Linear(5, 3), name='bias', mask=torch.tensor([0, 1, 0]) 2025-03-17T18:45:16.0519476Z ... 
) 2025-03-17T18:45:16.0519727Z >>> print(m.bias_mask) 2025-03-17T18:45:16.0520026Z tensor([0., 1., 0.]) 2025-03-17T18:45:16.0520210Z 2025-03-17T18:45:16.0520310Z 2025-03-17T18:45:16.0520728Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.0521106Z 2025-03-17T18:45:16.1619821Z msg = Cannot scrape callname=AveragedModel in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py line=117. 2025-03-17T18:45:16.1620771Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.1621562Z Implements averaged model for Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA). 2025-03-17T18:45:16.1622066Z 2025-03-17T18:45:16.1622319Z Stochastic Weight Averaging was proposed in `Averaging Weights Leads to 2025-03-17T18:45:16.1622927Z Wider Optima and Better Generalization`_ by Pavel Izmailov, Dmitrii 2025-03-17T18:45:16.1623672Z Podoprikhin, Timur Garipov, Dmitry Vetrov and Andrew Gordon Wilson 2025-03-17T18:45:16.1624113Z (UAI 2018). 2025-03-17T18:45:16.1624269Z 2025-03-17T18:45:16.1624495Z Exponential Moving Average is a variation of `Polyak averaging`_, 2025-03-17T18:45:16.1625100Z but using exponential weights instead of equal weights across iterations. 2025-03-17T18:45:16.1625469Z 2025-03-17T18:45:16.1625729Z AveragedModel class creates a copy of the provided module :attr:`model` 2025-03-17T18:45:16.1626328Z on the device :attr:`device` and allows to compute running averages of the 2025-03-17T18:45:16.1626891Z parameters of the :attr:`model`. 2025-03-17T18:45:16.1627108Z 2025-03-17T18:45:16.1627213Z Args: 2025-03-17T18:45:16.1627507Z model (torch.nn.Module): model to use with SWA/EMA 2025-03-17T18:45:16.1628034Z device (torch.device, optional): if provided, the averaged model will be 2025-03-17T18:45:16.1628706Z stored on the :attr:`device` 2025-03-17T18:45:16.1629219Z avg_fn (function, optional): the averaging function used to update 2025-03-17T18:45:16.1629781Z parameters; the function must take in the current value of the 2025-03-17T18:45:16.1630394Z :class:`AveragedModel` parameter, the current value of :attr:`model` 2025-03-17T18:45:16.1630959Z parameter, and the number of models already averaged; if None, 2025-03-17T18:45:16.1631462Z an equally weighted average is used (default: None) 2025-03-17T18:45:16.1631988Z multi_avg_fn (function, optional): the averaging function used to update 2025-03-17T18:45:16.1632718Z parameters inplace; the function must take in the current values of the 2025-03-17T18:45:16.1633376Z :class:`AveragedModel` parameters as a list, the current values of :attr:`model` 2025-03-17T18:45:16.1634019Z parameters as a list, and the number of models already averaged; if None, 2025-03-17T18:45:16.1634555Z an equally weighted average is used (default: None) 2025-03-17T18:45:16.1635054Z use_buffers (bool): if ``True``, it will compute running averages for 2025-03-17T18:45:16.1635622Z both the parameters and the buffers of the model. (default: ``False``) 2025-03-17T18:45:16.1635986Z 2025-03-17T18:45:16.1636080Z Example: 2025-03-17T18:45:16.1636358Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:16.1636746Z >>> loader, optimizer, model, loss_fn = ... 
2025-03-17T18:45:16.1637353Z >>> swa_model = torch.optim.swa_utils.AveragedModel(model) 2025-03-17T18:45:16.1637900Z >>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, 2025-03-17T18:45:16.1638385Z >>> T_max=300) 2025-03-17T18:45:16.1638737Z >>> swa_start = 160 2025-03-17T18:45:16.1639077Z >>> swa_scheduler = SWALR(optimizer, swa_lr=0.05) 2025-03-17T18:45:16.1639458Z >>> for i in range(300): 2025-03-17T18:45:16.1639781Z >>> for input, target in loader: 2025-03-17T18:45:16.1640136Z >>> optimizer.zero_grad() 2025-03-17T18:45:16.1640508Z >>> loss_fn(model(input), target).backward() 2025-03-17T18:45:16.1640883Z >>> optimizer.step() 2025-03-17T18:45:16.1641275Z >>> if i > swa_start: 2025-03-17T18:45:16.1641624Z >>> swa_model.update_parameters(model) 2025-03-17T18:45:16.1641999Z >>> swa_scheduler.step() 2025-03-17T18:45:16.1642310Z >>> else: 2025-03-17T18:45:16.1642585Z >>> scheduler.step() 2025-03-17T18:45:16.1642895Z >>> 2025-03-17T18:45:16.1643197Z >>> # Update bn statistics for the swa_model at the end 2025-03-17T18:45:16.1643691Z >>> torch.optim.swa_utils.update_bn(loader, swa_model) 2025-03-17T18:45:16.1643973Z 2025-03-17T18:45:16.1644296Z You can also use custom averaging functions with the `avg_fn` or `multi_avg_fn` parameters. 2025-03-17T18:45:16.1644985Z If no averaging function is provided, the default is to compute 2025-03-17T18:45:16.1645448Z equally-weighted average of the weights (SWA). 2025-03-17T18:45:16.1645722Z 2025-03-17T18:45:16.1645815Z Example: 2025-03-17T18:45:16.1646096Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:16.1646567Z >>> # Compute exponential moving averages of the weights and buffers 2025-03-17T18:45:16.1647083Z >>> ema_model = torch.optim.swa_utils.AveragedModel(model, 2025-03-17T18:45:16.1647601Z >>> torch.optim.swa_utils.get_ema_multi_avg_fn(0.9), use_buffers=True) 2025-03-17T18:45:16.1647957Z 2025-03-17T18:45:16.1648070Z .. note:: 2025-03-17T18:45:16.1648440Z When using SWA/EMA with models containing Batch Normalization you may 2025-03-17T18:45:16.1649006Z need to update the activation statistics for Batch Normalization. 2025-03-17T18:45:16.1649586Z This can be done either by using the :meth:`torch.optim.swa_utils.update_bn` 2025-03-17T18:45:16.1650193Z or by setting :attr:`use_buffers` to `True`. The first approach updates the 2025-03-17T18:45:16.1650801Z statistics in a post-training step by passing data through the model. The 2025-03-17T18:45:16.1651422Z second does it during the parameter update phase by averaging all buffers. 2025-03-17T18:45:16.1652050Z Empirical evidence has shown that updating the statistics in normalization 2025-03-17T18:45:16.1652672Z layers increases accuracy, but you may wish to empirically test which 2025-03-17T18:45:16.1653199Z approach yields the best results in your problem. 2025-03-17T18:45:16.1653485Z 2025-03-17T18:45:16.1653657Z .. note:: 2025-03-17T18:45:16.1654067Z :attr:`avg_fn` and `multi_avg_fn` are not saved in the :meth:`state_dict` of the model. 2025-03-17T18:45:16.1654447Z 2025-03-17T18:45:16.1654550Z .. note:: 2025-03-17T18:45:16.1654899Z When :meth:`update_parameters` is called for the first time (i.e. 2025-03-17T18:45:16.1655428Z :attr:`n_averaged` is `0`) the parameters of `model` are copied 2025-03-17T18:45:16.1655959Z to the parameters of :class:`AveragedModel`. For every subsequent 2025-03-17T18:45:16.1656491Z call of :meth:`update_parameters` the function `avg_fn` is used 2025-03-17T18:45:16.1656928Z to update the parameters. 
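As a concrete instance of the ``avg_fn`` hook discussed in the notes above, an exponential moving average with decay 0.9 can be written directly (a sketch in the same spirit as the docstring's examples; ``model`` is assumed to already exist):

    >>> # xdoctest: +SKIP("undefined variables")
    >>> ema_avg_fn = lambda averaged_p, model_p, num_averaged: 0.9 * averaged_p + 0.1 * model_p
    >>> ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg_fn)
    >>> # or, equivalently, the in-place multi-tensor variant shown earlier:
    >>> # ema_model = torch.optim.swa_utils.AveragedModel(
    >>> #     model, multi_avg_fn=torch.optim.swa_utils.get_ema_multi_avg_fn(0.9))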
2025-03-17T18:45:16.1657134Z 2025-03-17T18:45:16.1657379Z .. _Averaging Weights Leads to Wider Optima and Better Generalization: 2025-03-17T18:45:16.1657876Z https://arxiv.org/abs/1803.05407 2025-03-17T18:45:16.1658359Z .. _There Are Many Consistent Explanations of Unlabeled Data: Why You Should 2025-03-17T18:45:16.1658839Z Average: 2025-03-17T18:45:16.1659114Z https://arxiv.org/abs/1806.05594 2025-03-17T18:45:16.1659560Z .. _SWALP: Stochastic Weight Averaging in Low-Precision Training: 2025-03-17T18:45:16.1660003Z https://arxiv.org/abs/1904.11943 2025-03-17T18:45:16.1660477Z .. _Stochastic Weight Averaging in Parallel: Large-Batch Training That 2025-03-17T18:45:16.1660944Z Generalizes Well: 2025-03-17T18:45:16.1661254Z https://arxiv.org/abs/2001.02312 2025-03-17T18:45:16.1661598Z .. _Polyak averaging: 2025-03-17T18:45:16.1661961Z https://paperswithcode.com/method/polyak-averaging 2025-03-17T18:45:16.1662379Z 2025-03-17T18:45:16.1662770Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.1663161Z 2025-03-17T18:45:16.1663649Z msg = Cannot scrape callname=SWALR in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py line=369. 2025-03-17T18:45:16.1664536Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.1665154Z Anneals the learning rate in each parameter group to a fixed value. 2025-03-17T18:45:16.1665503Z 2025-03-17T18:45:16.1665745Z This learning rate scheduler is meant to be used with Stochastic Weight 2025-03-17T18:45:16.1666360Z Averaging (SWA) method (see `torch.optim.swa_utils.AveragedModel`). 2025-03-17T18:45:16.1666786Z 2025-03-17T18:45:16.1666877Z Args: 2025-03-17T18:45:16.1667197Z optimizer (torch.optim.Optimizer): wrapped optimizer 2025-03-17T18:45:16.1667710Z swa_lrs (float or list): the learning rate value for all param groups 2025-03-17T18:45:16.1668190Z together or separately for each group. 2025-03-17T18:45:16.1668659Z annealing_epochs (int): number of epochs in the annealing phase 2025-03-17T18:45:16.1669094Z (default: 10) 2025-03-17T18:45:16.1669496Z annealing_strategy (str): "cos" or "linear"; specifies the annealing 2025-03-17T18:45:16.1670066Z strategy: "cos" for cosine annealing, "linear" for linear annealing 2025-03-17T18:45:16.1670507Z (default: "cos") 2025-03-17T18:45:16.1670887Z last_epoch (int): the index of the last epoch (default: -1) 2025-03-17T18:45:16.1671190Z 2025-03-17T18:45:16.1671390Z The :class:`SWALR` scheduler can be used together with other 2025-03-17T18:45:16.1671922Z schedulers to switch to a constant learning rate late in the training 2025-03-17T18:45:16.1672377Z as in the example below. 2025-03-17T18:45:16.1672568Z 2025-03-17T18:45:16.1672658Z Example: 2025-03-17T18:45:16.1672942Z >>> # xdoctest: +SKIP("Undefined variables") 2025-03-17T18:45:16.1673320Z >>> loader, optimizer, model = ... 
2025-03-17T18:45:16.1673675Z >>> lr_lambda = lambda epoch: 0.9 2025-03-17T18:45:16.1674135Z >>> scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 2025-03-17T18:45:16.1674597Z >>> lr_lambda=lr_lambda) 2025-03-17T18:45:16.1675084Z >>> swa_scheduler = torch.optim.swa_utils.SWALR(optimizer, 2025-03-17T18:45:16.1675572Z >>> anneal_strategy="linear", anneal_epochs=20, swa_lr=0.05) 2025-03-17T18:45:16.1675983Z >>> swa_start = 160 2025-03-17T18:45:16.1676277Z >>> for i in range(300): 2025-03-17T18:45:16.1676601Z >>> for input, target in loader: 2025-03-17T18:45:16.1676955Z >>> optimizer.zero_grad() 2025-03-17T18:45:16.1677326Z >>> loss_fn(model(input), target).backward() 2025-03-17T18:45:16.1677699Z >>> optimizer.step() 2025-03-17T18:45:16.1678024Z >>> if i > swa_start: 2025-03-17T18:45:16.1678347Z >>> swa_scheduler.step() 2025-03-17T18:45:16.1678672Z >>> else: 2025-03-17T18:45:16.1678947Z >>> scheduler.step() 2025-03-17T18:45:16.1679157Z 2025-03-17T18:45:16.1679399Z .. _Averaging Weights Leads to Wider Optima and Better Generalization: 2025-03-17T18:45:16.1679885Z https://arxiv.org/abs/1803.05407 2025-03-17T18:45:16.1680196Z 2025-03-17T18:45:16.1680579Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.1680970Z 2025-03-17T18:45:16.6617639Z msg = Cannot scrape callname=assert_close in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_comparison.py line=1263. 2025-03-17T18:45:16.6618855Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:16.6619414Z Asserts that ``actual`` and ``expected`` are close. 2025-03-17T18:45:16.6619703Z 2025-03-17T18:45:16.6620267Z If ``actual`` and ``expected`` are strided, non-quantized, real-valued, and finite, they are considered close if 2025-03-17T18:45:16.6620778Z 2025-03-17T18:45:16.6620892Z .. math:: 2025-03-17T18:45:16.6621043Z 2025-03-17T18:45:16.6621428Z \lvert \text{actual} - \text{expected} \rvert \le \texttt{atol} + \texttt{rtol} \cdot \lvert \text{expected} \rvert 2025-03-17T18:45:16.6621943Z 2025-03-17T18:45:16.6622298Z Non-finite values (``-inf`` and ``inf``) are only considered close if and only if they are equal. ``NaN``'s are 2025-03-17T18:45:16.6622990Z only considered equal to each other if ``equal_nan`` is ``True``. 2025-03-17T18:45:16.6623394Z 2025-03-17T18:45:16.6623600Z In addition, they are only considered close if they have the same 2025-03-17T18:45:16.6623939Z 2025-03-17T18:45:16.6624141Z - :attr:`~torch.Tensor.device` (if ``check_device`` is ``True``), 2025-03-17T18:45:16.6624609Z - ``dtype`` (if ``check_dtype`` is ``True``), 2025-03-17T18:45:16.6625015Z - ``layout`` (if ``check_layout`` is ``True``), and 2025-03-17T18:45:16.6625413Z - stride (if ``check_stride`` is ``True``). 2025-03-17T18:45:16.6625668Z 2025-03-17T18:45:16.6625972Z If either ``actual`` or ``expected`` is a meta tensor, only the attribute checks will be performed. 2025-03-17T18:45:16.6626410Z 2025-03-17T18:45:16.6626911Z If ``actual`` and ``expected`` are sparse (either having COO, CSR, CSC, BSR, or BSC layout), their strided members are 2025-03-17T18:45:16.6627791Z checked individually. Indices, namely ``indices`` for COO, ``crow_indices`` and ``col_indices`` for CSR and BSR, 2025-03-17T18:45:16.6628535Z or ``ccol_indices`` and ``row_indices`` for CSC and BSC layouts, respectively, 2025-03-17T18:45:16.6629293Z are always checked for equality whereas the values are checked for closeness according to the definition above. 
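To make the closeness definition above concrete, a small worked check with explicit tolerances (illustrative, not part of the scraped docstring):

    >>> import torch
    >>> # |1.0001 - 1.0| = 1e-4, while atol + rtol * |expected| = 1e-5 + 1e-3 * 1.0 = 1.01e-3,
    >>> # so the call below passes; shrinking rtol to 1e-5 would shrink the bound to 2e-5
    >>> # and the same comparison would raise an AssertionError.
    >>> torch.testing.assert_close(1.0001, 1.0, rtol=1e-3, atol=1e-5)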
2025-03-17T18:45:16.6629810Z 2025-03-17T18:45:16.6630110Z If ``actual`` and ``expected`` are quantized, they are considered close if they have the same 2025-03-17T18:45:16.6630883Z :meth:`~torch.Tensor.qscheme` and the result of :meth:`~torch.Tensor.dequantize` is close according to the 2025-03-17T18:45:16.6631473Z definition above. 2025-03-17T18:45:16.6631639Z 2025-03-17T18:45:16.6631962Z ``actual`` and ``expected`` can be :class:`~torch.Tensor`'s or any tensor-or-scalar-likes from which 2025-03-17T18:45:16.6632884Z :class:`torch.Tensor`'s can be constructed with :func:`torch.as_tensor`. Except for Python scalars the input types 2025-03-17T18:45:16.6633768Z have to be directly related. In addition, ``actual`` and ``expected`` can be :class:`~collections.abc.Sequence`'s 2025-03-17T18:45:16.6634653Z or :class:`~collections.abc.Mapping`'s in which case they are considered close if their structure matches and all 2025-03-17T18:45:16.6635403Z their elements are considered close according to the above definition. 2025-03-17T18:45:16.6635759Z 2025-03-17T18:45:16.6635870Z .. note:: 2025-03-17T18:45:16.6636009Z 2025-03-17T18:45:16.6636363Z Python scalars are an exception to the type relation requirement, because their :func:`type`, i.e. 2025-03-17T18:45:16.6637386Z :class:`int`, :class:`float`, and :class:`complex`, is equivalent to the ``dtype`` of a tensor-like. Thus, 2025-03-17T18:45:16.6638125Z Python scalars of different types can be checked, but require ``check_dtype=False``. 2025-03-17T18:45:16.6638533Z 2025-03-17T18:45:16.6638636Z Args: 2025-03-17T18:45:16.6638870Z actual (Any): Actual input. 2025-03-17T18:45:16.6639208Z expected (Any): Expected input. 2025-03-17T18:45:16.6639815Z allow_subclasses (bool): If ``True`` (default) and except for Python scalars, inputs of directly related types 2025-03-17T18:45:16.6640475Z are allowed. Otherwise type equality is required. 2025-03-17T18:45:16.6641134Z rtol (Optional[float]): Relative tolerance. If specified ``atol`` must also be specified. If omitted, default 2025-03-17T18:45:16.6641966Z values based on the :attr:`~torch.Tensor.dtype` are selected with the below table. 2025-03-17T18:45:16.6642733Z atol (Optional[float]): Absolute tolerance. If specified ``rtol`` must also be specified. If omitted, default 2025-03-17T18:45:16.6643498Z values based on the :attr:`~torch.Tensor.dtype` are selected with the below table. 2025-03-17T18:45:16.6644154Z equal_nan (Union[bool, str]): If ``True``, two ``NaN`` values will be considered equal. 2025-03-17T18:45:16.6644832Z check_device (bool): If ``True`` (default), asserts that corresponding tensors are on the same 2025-03-17T18:45:16.6645510Z :attr:`~torch.Tensor.device`. If this check is disabled, tensors on different 2025-03-17T18:45:16.6646192Z :attr:`~torch.Tensor.device`'s are moved to the CPU before being compared. 2025-03-17T18:45:16.6646928Z check_dtype (bool): If ``True`` (default), asserts that corresponding tensors have the same ``dtype``. If this 2025-03-17T18:45:16.6647766Z check is disabled, tensors with different ``dtype``'s are promoted to a common ``dtype`` (according to 2025-03-17T18:45:16.6648420Z :func:`torch.promote_types`) before being compared. 2025-03-17T18:45:16.6649078Z check_layout (bool): If ``True`` (default), asserts that corresponding tensors have the same ``layout``. If this 2025-03-17T18:45:16.6649913Z check is disabled, tensors with different ``layout``'s are converted to strided tensors before being 2025-03-17T18:45:16.6650480Z compared. 
2025-03-17T18:45:16.6651015Z check_stride (bool): If ``True`` and corresponding tensors are strided, asserts that they have the same stride. 2025-03-17T18:45:16.6651874Z msg (Optional[Union[str, Callable[[str], str]]]): Optional error message to use in case a failure occurs during 2025-03-17T18:45:16.6652725Z the comparison. Can also passed as callable in which case it will be called with the generated message and 2025-03-17T18:45:16.6653344Z should return the new message. 2025-03-17T18:45:16.6653584Z 2025-03-17T18:45:16.6653675Z Raises: 2025-03-17T18:45:16.6654050Z ValueError: If no :class:`torch.Tensor` can be constructed from an input. 2025-03-17T18:45:16.6654585Z ValueError: If only ``rtol`` or ``atol`` is specified. 2025-03-17T18:45:16.6655295Z AssertionError: If corresponding inputs are not Python scalars and are not directly related. 2025-03-17T18:45:16.6656127Z AssertionError: If ``allow_subclasses`` is ``False``, but corresponding inputs are not Python scalars and have 2025-03-17T18:45:16.6656733Z different types. 2025-03-17T18:45:16.6657303Z AssertionError: If the inputs are :class:`~collections.abc.Sequence`'s, but their length does not match. 2025-03-17T18:45:16.6658174Z AssertionError: If the inputs are :class:`~collections.abc.Mapping`'s, but their set of keys do not match. 2025-03-17T18:45:16.6658999Z AssertionError: If corresponding tensors do not have the same :attr:`~torch.Tensor.shape`. 2025-03-17T18:45:16.6659758Z AssertionError: If ``check_layout`` is ``True``, but corresponding tensors do not have the same 2025-03-17T18:45:16.6660309Z :attr:`~torch.Tensor.layout`. 2025-03-17T18:45:16.6660771Z AssertionError: If only one of corresponding tensors is quantized. 2025-03-17T18:45:16.6661523Z AssertionError: If corresponding tensors are quantized, but have different :meth:`~torch.Tensor.qscheme`'s. 2025-03-17T18:45:16.6662343Z AssertionError: If ``check_device`` is ``True``, but corresponding tensors are not on the same 2025-03-17T18:45:16.6662889Z :attr:`~torch.Tensor.device`. 2025-03-17T18:45:16.6663467Z AssertionError: If ``check_dtype`` is ``True``, but corresponding tensors do not have the same ``dtype``. 2025-03-17T18:45:16.6664309Z AssertionError: If ``check_stride`` is ``True``, but corresponding strided tensors do not have the same stride. 2025-03-17T18:45:16.6665212Z AssertionError: If the values of corresponding tensors are not close according to the definition above. 2025-03-17T18:45:16.6665704Z 2025-03-17T18:45:16.6666090Z The following table displays the default ``rtol`` and ``atol`` for different ``dtype``'s. In case of mismatching 2025-03-17T18:45:16.6666849Z ``dtype``'s, the maximum of both tolerances is used. 
2025-03-17T18:45:16.6667127Z 2025-03-17T18:45:16.6667280Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6667673Z | ``dtype`` | ``rtol`` | ``atol`` | 2025-03-17T18:45:16.6668047Z +===========================+============+==========+ 2025-03-17T18:45:16.6668467Z | :attr:`~torch.float16` | ``1e-3`` | ``1e-5`` | 2025-03-17T18:45:16.6668848Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6669242Z | :attr:`~torch.bfloat16` | ``1.6e-2`` | ``1e-5`` | 2025-03-17T18:45:16.6669639Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6670031Z | :attr:`~torch.float32` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6670421Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6670811Z | :attr:`~torch.float64` | ``1e-7`` | ``1e-7`` | 2025-03-17T18:45:16.6671199Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6671596Z | :attr:`~torch.complex32` | ``1e-3`` | ``1e-5`` | 2025-03-17T18:45:16.6671989Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6672382Z | :attr:`~torch.complex64` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6672775Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6673171Z | :attr:`~torch.complex128` | ``1e-7`` | ``1e-7`` | 2025-03-17T18:45:16.6673565Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6673954Z | :attr:`~torch.quint8` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6674344Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6674738Z | :attr:`~torch.quint2x4` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6675126Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6675515Z | :attr:`~torch.quint4x2` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6675902Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6676684Z | :attr:`~torch.qint8` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6677079Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6677468Z | :attr:`~torch.qint32` | ``1.3e-6`` | ``1e-5`` | 2025-03-17T18:45:16.6677843Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6678224Z | other | ``0.0`` | ``0.0`` | 2025-03-17T18:45:16.6678596Z +---------------------------+------------+----------+ 2025-03-17T18:45:16.6678848Z 2025-03-17T18:45:16.6678948Z .. note:: 2025-03-17T18:45:16.6679095Z 2025-03-17T18:45:16.6679495Z :func:`~torch.testing.assert_close` is highly configurable with strict default settings. Users are encouraged 2025-03-17T18:45:16.6680380Z to :func:`~functools.partial` it to fit their use case. For example, if an equality check is needed, one might 2025-03-17T18:45:16.6681132Z define an ``assert_equal`` that uses zero tolerances for every ``dtype`` by default: 2025-03-17T18:45:16.6681536Z 2025-03-17T18:45:16.6681642Z >>> import functools 2025-03-17T18:45:16.6682096Z >>> assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0) 2025-03-17T18:45:16.6682620Z >>> assert_equal(1e-9, 1e-10) 2025-03-17T18:45:16.6682970Z Traceback (most recent call last): 2025-03-17T18:45:16.6683307Z ... 2025-03-17T18:45:16.6683576Z AssertionError: Scalars are not equal! 2025-03-17T18:45:16.6683928Z 2025-03-17T18:45:16.6684202Z Expected 1e-10 but got 1e-09. 
2025-03-17T18:45:16.6684561Z Absolute difference: 9.000000000000001e-10 2025-03-17T18:45:16.6684969Z Relative difference: 9.0 2025-03-17T18:45:16.6685172Z 2025-03-17T18:45:16.6685281Z Examples: 2025-03-17T18:45:16.6685552Z >>> # tensor to tensor comparison 2025-03-17T18:45:16.6685913Z >>> expected = torch.tensor([1e0, 1e-1, 1e-2]) 2025-03-17T18:45:16.6686314Z >>> actual = torch.acos(torch.cos(expected)) 2025-03-17T18:45:16.6686733Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6687013Z 2025-03-17T18:45:16.6687131Z >>> # scalar to scalar comparison 2025-03-17T18:45:16.6687466Z >>> import math 2025-03-17T18:45:16.6687748Z >>> expected = math.sqrt(2.0) 2025-03-17T18:45:16.6688134Z >>> actual = 2.0 / math.sqrt(2.0) 2025-03-17T18:45:16.6688516Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6688793Z 2025-03-17T18:45:16.6688927Z >>> # numpy array to numpy array comparison 2025-03-17T18:45:16.6689291Z >>> import numpy as np 2025-03-17T18:45:16.6689625Z >>> expected = np.array([1e0, 1e-1, 1e-2]) 2025-03-17T18:45:16.6689999Z >>> actual = np.arccos(np.cos(expected)) 2025-03-17T18:45:16.6690395Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6690657Z 2025-03-17T18:45:16.6690792Z >>> # sequence to sequence comparison 2025-03-17T18:45:16.6691144Z >>> import numpy as np 2025-03-17T18:45:16.6691601Z >>> # The types of the sequences do not have to match. They only have to have the same 2025-03-17T18:45:16.6692124Z >>> # length and their elements have to match. 2025-03-17T18:45:16.6692541Z >>> expected = [torch.tensor([1.0]), 2.0, np.array(3.0)] 2025-03-17T18:45:16.6692935Z >>> actual = tuple(expected) 2025-03-17T18:45:16.6693304Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6693568Z 2025-03-17T18:45:16.6693705Z >>> # mapping to mapping comparison 2025-03-17T18:45:16.6694075Z >>> from collections import OrderedDict 2025-03-17T18:45:16.6694423Z >>> import numpy as np 2025-03-17T18:45:16.6694732Z >>> foo = torch.tensor(1.0) 2025-03-17T18:45:16.6695041Z >>> bar = 2.0 2025-03-17T18:45:16.6695310Z >>> baz = np.array(3.0) 2025-03-17T18:45:16.6695766Z >>> # The types and a possible ordering of mappings do not have to match. They only 2025-03-17T18:45:16.6696423Z >>> # have to have the same set of keys and their elements have to match. 2025-03-17T18:45:16.6696973Z >>> expected = OrderedDict([("foo", foo), ("bar", bar), ("baz", baz)]) 2025-03-17T18:45:16.6697450Z >>> actual = {"baz": baz, "bar": bar, "foo": foo} 2025-03-17T18:45:16.6697866Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6698143Z 2025-03-17T18:45:16.6698277Z >>> expected = torch.tensor([1.0, 2.0, 3.0]) 2025-03-17T18:45:16.6698647Z >>> actual = expected.clone() 2025-03-17T18:45:16.6699050Z >>> # By default, directly related instances can be compared 2025-03-17T18:45:16.6699579Z >>> torch.testing.assert_close(torch.nn.Parameter(actual), expected) 2025-03-17T18:45:16.6700133Z >>> # This check can be made more strict with allow_subclasses=False 2025-03-17T18:45:16.6700583Z >>> torch.testing.assert_close( 2025-03-17T18:45:16.6701027Z ... torch.nn.Parameter(actual), expected, allow_subclasses=False 2025-03-17T18:45:16.6701450Z ... ) 2025-03-17T18:45:16.6701712Z Traceback (most recent call last): 2025-03-17T18:45:16.6702040Z ... 2025-03-17T18:45:16.6702384Z TypeError: No comparison pair was able to handle inputs of type 2025-03-17T18:45:16.6702948Z and . 
2025-03-17T18:45:16.6703525Z >>> # If the inputs are not directly related, they are never considered close 2025-03-17T18:45:16.6704050Z >>> torch.testing.assert_close(actual.numpy(), expected) 2025-03-17T18:45:16.6704472Z Traceback (most recent call last): 2025-03-17T18:45:16.6704832Z ... 2025-03-17T18:45:16.6705263Z TypeError: No comparison pair was able to handle inputs of type 2025-03-17T18:45:16.6705800Z and . 2025-03-17T18:45:16.6706291Z >>> # Exceptions to these rules are Python scalars. They can be checked regardless of 2025-03-17T18:45:16.6706910Z >>> # their type if check_dtype=False. 2025-03-17T18:45:16.6707335Z >>> torch.testing.assert_close(1.0, 1, check_dtype=False) 2025-03-17T18:45:16.6707640Z 2025-03-17T18:45:16.6707746Z >>> # NaN != NaN by default. 2025-03-17T18:45:16.6708093Z >>> expected = torch.tensor(float("Nan")) 2025-03-17T18:45:16.6708493Z >>> actual = expected.clone() 2025-03-17T18:45:16.6708866Z >>> torch.testing.assert_close(actual, expected) 2025-03-17T18:45:16.6709257Z Traceback (most recent call last): 2025-03-17T18:45:16.6709583Z ... 2025-03-17T18:45:16.6709852Z AssertionError: Scalars are not close! 2025-03-17T18:45:16.6710195Z 2025-03-17T18:45:16.6710464Z Expected nan but got nan. 2025-03-17T18:45:16.6710824Z Absolute difference: nan (up to 1e-05 allowed) 2025-03-17T18:45:16.6711244Z Relative difference: nan (up to 1.3e-06 allowed) 2025-03-17T18:45:16.6711722Z >>> torch.testing.assert_close(actual, expected, equal_nan=True) 2025-03-17T18:45:16.6712052Z 2025-03-17T18:45:16.6712198Z >>> expected = torch.tensor([1.0, 2.0, 3.0]) 2025-03-17T18:45:16.6712561Z >>> actual = torch.tensor([1.0, 4.0, 5.0]) 2025-03-17T18:45:16.6712959Z >>> # The default error message can be overwritten. 2025-03-17T18:45:16.6713528Z >>> torch.testing.assert_close(actual, expected, msg="Argh, the tensors are not close!") 2025-03-17T18:45:16.6714073Z Traceback (most recent call last): 2025-03-17T18:45:16.6714401Z ... 2025-03-17T18:45:16.6714691Z AssertionError: Argh, the tensors are not close! 2025-03-17T18:45:16.6715201Z >>> # If msg is a callable, it can be used to augment the generated message with 2025-03-17T18:45:16.6715666Z >>> # extra information 2025-03-17T18:45:16.6715983Z >>> torch.testing.assert_close( 2025-03-17T18:45:16.6716425Z ... actual, expected, msg=lambda msg: f"Header\n\n{msg}\n\nFooter" 2025-03-17T18:45:16.6716910Z ... ) 2025-03-17T18:45:16.6717175Z Traceback (most recent call last): 2025-03-17T18:45:16.6717504Z ... 2025-03-17T18:45:16.6717756Z AssertionError: Header 2025-03-17T18:45:16.6718060Z 2025-03-17T18:45:16.6718335Z Tensor-likes are not close! 2025-03-17T18:45:16.6718659Z 2025-03-17T18:45:16.6718939Z Mismatched elements: 2 / 3 (66.7%) 2025-03-17T18:45:16.6719398Z Greatest absolute difference: 2.0 at index (1,) (up to 1e-05 allowed) 2025-03-17T18:45:16.6720001Z Greatest relative difference: 1.0 at index (1,) (up to 1.3e-06 allowed) 2025-03-17T18:45:16.6720467Z 2025-03-17T18:45:16.6720723Z Footer 2025-03-17T18:45:16.6720962Z 2025-03-17T18:45:16.6721352Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:16.6721743Z 2025-03-17T18:45:17.7203539Z msg = Cannot scrape callname=register_pytree_node in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py line=104. 2025-03-17T18:45:17.7204561Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.7205114Z Register a container-like type as pytree node. 
2025-03-17T18:45:17.7205394Z 2025-03-17T18:45:17.7205495Z Args: 2025-03-17T18:45:17.7205837Z cls (type): A Python type to treat as an internal pytree node. 2025-03-17T18:45:17.7206442Z flatten_fn (callable): A function to be used during flattening, taking an instance of 2025-03-17T18:45:17.7207110Z ``cls`` and returning a pair, with (1) an iterable for the children to be flattened 2025-03-17T18:45:17.7208003Z recursively, and (2) some hashable auxiliary data to be stored in the treespec and to be 2025-03-17T18:45:17.7208565Z passed to the ``unflatten_fn``. 2025-03-17T18:45:17.7209101Z unflatten_fn (callable): A function taking two arguments: the auxiliary data that was 2025-03-17T18:45:17.7209796Z returned by ``flatten_fn`` and stored in the treespec, and the unflattened children. 2025-03-17T18:45:17.7210361Z The function should return an instance of ``cls``. 2025-03-17T18:45:17.7210917Z serialized_type_name (str, optional): A keyword argument used to specify the fully 2025-03-17T18:45:17.7211578Z qualified name used when serializing the tree spec. 2025-03-17T18:45:17.7212190Z to_dumpable_context (callable, optional): An optional keyword argument to custom specify how 2025-03-17T18:45:17.7212924Z to convert the context of the pytree to a custom json dumpable representation. This is 2025-03-17T18:45:17.7213626Z used for json serialization, which is being used in :mod:`torch.export` right now. 2025-03-17T18:45:17.7214347Z from_dumpable_context (callable, optional): An optional keyword argument to custom specify 2025-03-17T18:45:17.7215059Z how to convert the custom json dumpable representation of the context back to the 2025-03-17T18:45:17.7215734Z original context. This is used for json deserialization, which is being used in 2025-03-17T18:45:17.7216258Z :mod:`torch.export` right now. 2025-03-17T18:45:17.7216488Z 2025-03-17T18:45:17.7216623Z Example:: 2025-03-17T18:45:17.7216766Z 2025-03-17T18:45:17.7216890Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.7217239Z >>> # Registry a Python type with lambda functions 2025-03-17T18:45:17.7217624Z >>> register_pytree_node( 2025-03-17T18:45:17.7217932Z ... set, 2025-03-17T18:45:17.7218224Z ... lambda s: (sorted(s), None, None), 2025-03-17T18:45:17.7218612Z ... lambda children, _: set(children), 2025-03-17T18:45:17.7218956Z ... ) 2025-03-17T18:45:17.7219188Z 2025-03-17T18:45:17.7219570Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.7219963Z 2025-03-17T18:45:17.7718054Z msg = Cannot scrape callname=SelectiveCheckpointContext in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py line=1200. 2025-03-17T18:45:17.7719114Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.7719523Z 2025-03-17T18:45:17.7719753Z Context passed to policy function during selective checkpointing. 2025-03-17T18:45:17.7720128Z 2025-03-17T18:45:17.7720382Z This class is used to pass relevant metadata to the policy function during 2025-03-17T18:45:17.7721027Z selective checkpointing. The metadata includes whether the current invocation 2025-03-17T18:45:17.7721608Z of the policy function is during recomputation or not. 
2025-03-17T18:45:17.7721906Z 2025-03-17T18:45:17.7722019Z Example: 2025-03-17T18:45:17.7722249Z >>> # xdoctest: +SKIP(stub) 2025-03-17T18:45:17.7722543Z >>> 2025-03-17T18:45:17.7722864Z >>> def policy_fn(ctx, op, *args, **kwargs): 2025-03-17T18:45:17.7723333Z >>> print(ctx.is_recompute) 2025-03-17T18:45:17.7723679Z >>> 2025-03-17T18:45:17.7724203Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn) 2025-03-17T18:45:17.7724779Z >>> 2025-03-17T18:45:17.7725057Z >>> out = torch.utils.checkpoint.checkpoint( 2025-03-17T18:45:17.7725419Z >>> fn, x, y, 2025-03-17T18:45:17.7725692Z >>> use_reentrant=False, 2025-03-17T18:45:17.7726010Z >>> context_fn=context_fn, 2025-03-17T18:45:17.7726321Z >>> ) 2025-03-17T18:45:17.7726448Z 2025-03-17T18:45:17.7726727Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.7727107Z 2025-03-17T18:45:17.7727778Z msg = Cannot scrape callname=create_selective_checkpoint_contexts in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py line=1334. 2025-03-17T18:45:17.7728902Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.7729292Z 2025-03-17T18:45:17.7729551Z Helper to avoid recomputing certain ops during activation checkpointing. 2025-03-17T18:45:17.7729922Z 2025-03-17T18:45:17.7730158Z Use this with `torch.utils.checkpoint.checkpoint` to control which 2025-03-17T18:45:17.7730667Z operations are recomputed during the backward pass. 2025-03-17T18:45:17.7730978Z 2025-03-17T18:45:17.7731067Z Args: 2025-03-17T18:45:17.7731381Z policy_fn_or_list (Callable or List): 2025-03-17T18:45:17.7731808Z - If a policy function is provided, it should accept a 2025-03-17T18:45:17.7732360Z :class:`SelectiveCheckpointContext`, the :class:`OpOverload`, args and 2025-03-17T18:45:17.7732958Z kwargs to the op, and return a :class:`CheckpointPolicy` enum value 2025-03-17T18:45:17.7733556Z indicating whether the execution of the op should be recomputed or not. 2025-03-17T18:45:17.7734152Z - If a list of operations is provided, it is equivalent to a policy 2025-03-17T18:45:17.7734688Z returning `CheckpointPolicy.MUST_SAVE` for the specified 2025-03-17T18:45:17.7735242Z operations and `CheckpointPolicy.PREFER_RECOMPUTE` for all other 2025-03-17T18:45:17.7735703Z operations. 2025-03-17T18:45:17.7736097Z allow_cache_entry_mutation (bool, optional): By default, an error is 2025-03-17T18:45:17.7736675Z raised if any tensors cached by selective activation checkpoint are 2025-03-17T18:45:17.7737451Z mutated in order to ensure correctness. If set to `True`, this check 2025-03-17T18:45:17.7737901Z is disabled. 2025-03-17T18:45:17.7738165Z Returns: 2025-03-17T18:45:17.7738409Z A tuple of two context managers. 
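The Example block that follows writes its continuation lines with ``>>>``; for reference, the same pattern with conventional ``...`` continuations (a sketch that mirrors the docstring rather than adding anything new, assuming a torch build where these names are importable from ``torch.utils.checkpoint``):

    >>> import functools
    >>> import torch
    >>> from torch.utils.checkpoint import (
    ...     CheckpointPolicy, checkpoint, create_selective_checkpoint_contexts,
    ... )
    >>> x = torch.rand(10, 10, requires_grad=True)
    >>> y = torch.rand(10, 10, requires_grad=True)
    >>> ops_to_save = [torch.ops.aten.mm.default]
    >>> def policy_fn(ctx, op, *args, **kwargs):
    ...     if op in ops_to_save:
    ...         return CheckpointPolicy.MUST_SAVE
    ...     return CheckpointPolicy.PREFER_RECOMPUTE
    >>> context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn)
    >>> def fn(x, y):
    ...     return torch.sigmoid(torch.matmul(torch.matmul(x, y), y)) * y
    >>> out = checkpoint(fn, x, y, use_reentrant=False, context_fn=context_fn)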
2025-03-17T18:45:17.7738640Z 2025-03-17T18:45:17.7738731Z Example: 2025-03-17T18:45:17.7738990Z >>> # xdoctest: +REQUIRES(LINUX) 2025-03-17T18:45:17.7739317Z >>> import functools 2025-03-17T18:45:17.7739585Z >>> 2025-03-17T18:45:17.7739847Z >>> x = torch.rand(10, 10, requires_grad=True) 2025-03-17T18:45:17.7740227Z >>> y = torch.rand(10, 10, requires_grad=True) 2025-03-17T18:45:17.7740564Z >>> 2025-03-17T18:45:17.7740797Z >>> ops_to_save = [ 2025-03-17T18:45:17.7741193Z >>> torch.ops.aten.mm.default, 2025-03-17T18:45:17.7741515Z >>> ] 2025-03-17T18:45:17.7741739Z >>> 2025-03-17T18:45:17.7742004Z >>> def policy_fn(ctx, op, *args, **kwargs): 2025-03-17T18:45:17.7742364Z >>> if op in ops_to_save: 2025-03-17T18:45:17.7742711Z >>> return CheckpointPolicy.MUST_SAVE 2025-03-17T18:45:17.7743060Z >>> else: 2025-03-17T18:45:17.7743351Z >>> return CheckpointPolicy.PREFER_RECOMPUTE 2025-03-17T18:45:17.7743710Z >>> 2025-03-17T18:45:17.7744121Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn) 2025-03-17T18:45:17.7744622Z >>> 2025-03-17T18:45:17.7744856Z >>> # or equivalently 2025-03-17T18:45:17.7745329Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, ops_to_save) 2025-03-17T18:45:17.7745831Z >>> 2025-03-17T18:45:17.7746059Z >>> def fn(x, y): 2025-03-17T18:45:17.7746440Z >>> return torch.sigmoid(torch.matmul(torch.matmul(x, y), y)) * y 2025-03-17T18:45:17.7746934Z >>> 2025-03-17T18:45:17.7747213Z >>> out = torch.utils.checkpoint.checkpoint( 2025-03-17T18:45:17.7747582Z >>> fn, x, y, 2025-03-17T18:45:17.7747864Z >>> use_reentrant=False, 2025-03-17T18:45:17.7748188Z >>> context_fn=context_fn, 2025-03-17T18:45:17.7748500Z >>> ) 2025-03-17T18:45:17.7748626Z 2025-03-17T18:45:17.7748899Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.7749280Z 2025-03-17T18:45:17.7933945Z msg = Cannot scrape callname=CppExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1064. 2025-03-17T18:45:17.7935150Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.7935543Z 2025-03-17T18:45:17.7935715Z Create a :class:`setuptools.Extension` for C++. 2025-03-17T18:45:17.7935979Z 2025-03-17T18:45:17.7936242Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:17.7937003Z bare minimum (but often sufficient) arguments to build a C++ extension. 2025-03-17T18:45:17.7937361Z 2025-03-17T18:45:17.7937591Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:17.7938087Z constructor. Full list arguments can be found at 2025-03-17T18:45:17.7938775Z https://setuptools.pypa.io/en/latest/userguide/ext_modules.html#extension-api-reference 2025-03-17T18:45:17.7939239Z 2025-03-17T18:45:17.7939367Z .. warning:: 2025-03-17T18:45:17.7939756Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:17.7940345Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:17.7941036Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:17.7941884Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:17.7942601Z APIs from libtorch (aten objects, operators and the dispatcher). For 2025-03-17T18:45:17.7943613Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:17.7944101Z register the ops through the dispatcher. 
2025-03-17T18:45:17.7944358Z 2025-03-17T18:45:17.7944596Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:17.7945169Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:17.7945724Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:17.7946290Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:17.7946928Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:17.7947552Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:17.7948265Z CPython version of choice. 2025-03-17T18:45:17.7948599Z 2025-03-17T18:45:17.7948701Z Example: 2025-03-17T18:45:17.7948962Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.7949500Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:17.7949915Z >>> from setuptools import setup 2025-03-17T18:45:17.7950426Z >>> from torch.utils.cpp_extension import BuildExtension, CppExtension 2025-03-17T18:45:17.7950934Z >>> setup( 2025-03-17T18:45:17.7951192Z ... name='extension', 2025-03-17T18:45:17.7951526Z ... ext_modules=[ 2025-03-17T18:45:17.7951827Z ... CppExtension( 2025-03-17T18:45:17.7952133Z ... name='extension', 2025-03-17T18:45:17.7952539Z ... sources=['extension.cpp'], 2025-03-17T18:45:17.7952898Z ... extra_compile_args=['-g'], 2025-03-17T18:45:17.7953360Z ... extra_link_args=['-Wl,--no-as-needed', '-lm']) 2025-03-17T18:45:17.7953739Z ... ], 2025-03-17T18:45:17.7954033Z ... cmdclass={ 2025-03-17T18:45:17.7954324Z ... 'build_ext': BuildExtension 2025-03-17T18:45:17.7954706Z ... }) 2025-03-17T18:45:17.7954861Z 2025-03-17T18:45:17.7955128Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.7955581Z 2025-03-17T18:45:17.7956266Z msg = Cannot scrape callname=CUDAExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1134. 2025-03-17T18:45:17.7957396Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.7957939Z 2025-03-17T18:45:17.7958189Z Create a :class:`setuptools.Extension` for CUDA/C++. 2025-03-17T18:45:17.7958601Z 2025-03-17T18:45:17.7958906Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:17.7959684Z bare minimum (but often sufficient) arguments to build a CUDA/C++ 2025-03-17T18:45:17.7960320Z extension. This includes the CUDA include path, library path and runtime 2025-03-17T18:45:17.7960819Z library. 2025-03-17T18:45:17.7960969Z 2025-03-17T18:45:17.7961202Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:17.7961760Z constructor. Full list arguments can be found at 2025-03-17T18:45:17.7962426Z https://setuptools.pypa.io/en/latest/userguide/ext_modules.html#extension-api-reference 2025-03-17T18:45:17.7962880Z 2025-03-17T18:45:17.7963000Z .. warning:: 2025-03-17T18:45:17.7963426Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:17.7964105Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:17.7964708Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:17.7965280Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:17.7965938Z APIs from libtorch (aten objects, operators and the dispatcher). 
For 2025-03-17T18:45:17.7966570Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:17.7967135Z register the ops through the dispatcher. 2025-03-17T18:45:17.7967492Z 2025-03-17T18:45:17.7967855Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:17.7968682Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:17.7969558Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:17.7970128Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:17.7970700Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:17.7971274Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:17.7971740Z CPython version of choice. 2025-03-17T18:45:17.7971951Z 2025-03-17T18:45:17.7972046Z Example: 2025-03-17T18:45:17.7972286Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.7972629Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:17.7973023Z >>> from setuptools import setup 2025-03-17T18:45:17.7973481Z >>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension 2025-03-17T18:45:17.7973935Z >>> setup( 2025-03-17T18:45:17.7974274Z ... name='cuda_extension', 2025-03-17T18:45:17.7974594Z ... ext_modules=[ 2025-03-17T18:45:17.7974871Z ... CUDAExtension( 2025-03-17T18:45:17.7975189Z ... name='cuda_extension', 2025-03-17T18:45:17.7975601Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:17.7976044Z ... extra_compile_args={'cxx': ['-g'], 2025-03-17T18:45:17.7976428Z ... 'nvcc': ['-O2']}, 2025-03-17T18:45:17.7976838Z ... extra_link_args=['-Wl,--no-as-needed', '-lcuda']) 2025-03-17T18:45:17.7977225Z ... ], 2025-03-17T18:45:17.7977469Z ... cmdclass={ 2025-03-17T18:45:17.7977763Z ... 'build_ext': BuildExtension 2025-03-17T18:45:17.7978091Z ... }) 2025-03-17T18:45:17.7978240Z 2025-03-17T18:45:17.7978347Z Compute capabilities: 2025-03-17T18:45:17.7978516Z 2025-03-17T18:45:17.7978843Z By default the extension will be compiled to run on all archs of the cards visible during the 2025-03-17T18:45:17.7979577Z building process of the extension, plus PTX. If down the road a new card is installed the 2025-03-17T18:45:17.7980304Z extension may need to be recompiled. If a visible card has a compute capability (CC) that's 2025-03-17T18:45:17.7981044Z newer than the newest version for which your nvcc can build fully-compiled binaries, PyTorch 2025-03-17T18:45:17.7981774Z will make nvcc fall back to building kernels with the newest version of PTX your nvcc does 2025-03-17T18:45:17.7982323Z support (see below for details on PTX). 2025-03-17T18:45:17.7982605Z 2025-03-17T18:45:17.7982934Z You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify which 2025-03-17T18:45:17.7983508Z CCs you want the extension to support: 2025-03-17T18:45:17.7983734Z 2025-03-17T18:45:17.7983946Z ``TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py`` 2025-03-17T18:45:17.7984510Z ``TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py`` 2025-03-17T18:45:17.7984875Z 2025-03-17T18:45:17.7985213Z The +PTX option causes extension kernel binaries to include PTX instructions for the specified 2025-03-17T18:45:17.7985991Z CC. 
PTX is an intermediate representation that allows kernels to runtime-compile for any CC >= 2025-03-17T18:45:17.7986862Z the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with 2025-03-17T18:45:17.7987603Z CC >= 8.6). This improves your binary's forward compatibility. However, relying on older PTX to 2025-03-17T18:45:17.7988363Z provide forward compat by runtime-compiling for newer CCs can modestly reduce performance on 2025-03-17T18:45:17.7989104Z those newer CCs. If you know the exact CC(s) of the GPUs you want to target, you're always better 2025-03-17T18:45:17.7989845Z off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6, 2025-03-17T18:45:17.7990623Z "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but 2025-03-17T18:45:17.7991172Z "8.0 8.6" would be better. 2025-03-17T18:45:17.7991350Z 2025-03-17T18:45:17.7991670Z Note that while it's possible to include all supported archs, the more archs get included the 2025-03-17T18:45:17.7992404Z slower the building process will be, as it will build a separate kernel image for each arch. 2025-03-17T18:45:17.7992822Z 2025-03-17T18:45:17.7993176Z Note that CUDA-11.5 nvcc will hit an internal compiler error while parsing torch/extension.h on Windows. 2025-03-17T18:45:17.7993867Z To work around the issue, move the Python binding logic to a pure C++ file. 2025-03-17T18:45:17.7994205Z 2025-03-17T18:45:17.7994313Z Example use: 2025-03-17T18:45:17.7994566Z #include 2025-03-17T18:45:17.7994905Z at::Tensor SigmoidAlphaBlendForwardCuda(....) 2025-03-17T18:45:17.7995185Z 2025-03-17T18:45:17.7995281Z Instead of: 2025-03-17T18:45:17.7995543Z #include 2025-03-17T18:45:17.7995977Z torch::Tensor SigmoidAlphaBlendForwardCuda(...) 2025-03-17T18:45:17.7996266Z 2025-03-17T18:45:17.7996549Z Currently open issue for nvcc bug: https://github.com/pytorch/pytorch/issues/69460 2025-03-17T18:45:17.7997481Z Complete workaround code example: https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48 2025-03-17T18:45:17.7998132Z 2025-03-17T18:45:17.7998250Z Relocatable device code linking: 2025-03-17T18:45:17.7998469Z 2025-03-17T18:45:17.7998759Z If you want to reference device symbols across compilation units (across object files), 2025-03-17T18:45:17.7999443Z the object files need to be built with `relocatable device code` (-rdc=true or -dc). 2025-03-17T18:45:17.8000213Z An exception to this rule is "dynamic parallelism" (nested kernel launches), which is not used a lot anymore. 2025-03-17T18:45:17.8001048Z `Relocatable device code` is less optimized, so it should be used only on object files that need it. 2025-03-17T18:45:17.8002098Z Using `-dlto` (Device Link Time Optimization) at the device code compilation step and `dlink` step 2025-03-17T18:45:17.8002869Z helps reduce the potential perf degradation of `-rdc`. 2025-03-17T18:45:17.8003463Z Note that it needs to be used at both steps to be useful. 2025-03-17T18:45:17.8003759Z 2025-03-17T18:45:17.8004267Z If you have `rdc` objects, you need an extra `-dlink` (device linking) step before the CPU symbol linking step. 2025-03-17T18:45:17.8005052Z There is also a case where `-dlink` is used without `-rdc`: 2025-03-17T18:45:17.8005693Z when an extension is linked against a static lib containing rdc-compiled objects 2025-03-17T18:45:17.8006497Z like the [NVSHMEM library](https://developer.nvidia.com/nvshmem).
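Returning to the `TORCH_CUDA_ARCH_LIST` discussion above (list the exact CCs you ship for, and only add +PTX when you need forward compatibility), here is a small sketch of pinning the arch list from inside a build script instead of on the command line; the "8.0 8.6" choice mirrors the example in the text and is illustrative, not a recommendation.

    # Sketch: equivalent to TORCH_CUDA_ARCH_LIST="8.0 8.6" python build_my_extension.py,
    # but set programmatically before the extension build runs.
    import os

    # Only provide a default if the caller has not already chosen an arch list.
    os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "8.0 8.6")

    # Using "8.0 8.6+PTX" instead would also embed PTX for forward compatibility,
    # at some runtime-compilation cost on newer CCs, as described above.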
2025-03-17T18:45:17.8006871Z 2025-03-17T18:45:17.8007157Z Note: Ninja is required to build a CUDA Extension with RDC linking. 2025-03-17T18:45:17.8007513Z 2025-03-17T18:45:17.8007691Z Example: 2025-03-17T18:45:17.8008055Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.8008502Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:17.8008997Z >>> CUDAExtension( 2025-03-17T18:45:17.8009388Z ... name='cuda_extension', 2025-03-17T18:45:17.8009830Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:17.8010350Z ... dlink=True, 2025-03-17T18:45:17.8010794Z ... dlink_libraries=["dlink_lib"], 2025-03-17T18:45:17.8011239Z ... extra_compile_args={'cxx': ['-g'], 2025-03-17T18:45:17.8011765Z ... 'nvcc': ['-O2', '-rdc=true']}) 2025-03-17T18:45:17.8012092Z 2025-03-17T18:45:17.8012383Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.8012925Z 2025-03-17T18:45:17.8013569Z msg = Cannot scrape callname=SyclExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1325. 2025-03-17T18:45:17.8014638Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.8015102Z 2025-03-17T18:45:17.8015312Z Creates a :class:`setuptools.Extension` for SYCL/C++. 2025-03-17T18:45:17.8015685Z 2025-03-17T18:45:17.8015982Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:17.8016664Z bare minimum (but often sufficient) arguments to build a SYCL/C++ 2025-03-17T18:45:17.8017176Z extension. 2025-03-17T18:45:17.8017439Z 2025-03-17T18:45:17.8017685Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:17.8018210Z constructor. 2025-03-17T18:45:17.8018412Z 2025-03-17T18:45:17.8018582Z .. warning:: 2025-03-17T18:45:17.8019076Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:17.8019759Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:17.8020429Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:17.8021096Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:17.8021842Z APIs from libtorch (aten objects, operators and the dispatcher). For 2025-03-17T18:45:17.8022562Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:17.8023151Z register the ops through the dispatcher. 2025-03-17T18:45:17.8023449Z 2025-03-17T18:45:17.8023745Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:17.8024433Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:17.8025078Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:17.8025757Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:17.8026434Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:17.8027167Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:17.8027777Z CPython version of choice. 2025-03-17T18:45:17.8028005Z 2025-03-17T18:45:17.8028173Z Example: 2025-03-17T18:45:17.8028518Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.8028976Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:17.8029583Z >>> from torch.utils.cpp_extension import BuildExtension, SyclExtension 2025-03-17T18:45:17.8030161Z >>> setup( 2025-03-17T18:45:17.8030516Z ... 
name='xpu_extension', 2025-03-17T18:45:17.8030899Z ... ext_modules=[ 2025-03-17T18:45:17.8031327Z ... SyclExtension( 2025-03-17T18:45:17.8031683Z ... name='xpu_extension', 2025-03-17T18:45:17.8032168Z ... sources=['extension.cpp', 'extension_kernel.cpp'], 2025-03-17T18:45:17.8032844Z ... extra_compile_args={'cxx': ['-g', '-std=c++20', '-fPIC']}) 2025-03-17T18:45:17.8033356Z ... ], 2025-03-17T18:45:17.8033722Z ... cmdclass={ 2025-03-17T18:45:17.8034122Z ... 'build_ext': BuildExtension 2025-03-17T18:45:17.8034540Z ... }) 2025-03-17T18:45:17.8034831Z 2025-03-17T18:45:17.8035194Z By default the extension will be compiled to run on all archs of the cards visible during the 2025-03-17T18:45:17.8035993Z building process of the extension. If down the road a new card is installed the 2025-03-17T18:45:17.8036728Z extension may need to be recompiled. You can override the default behavior using 2025-03-17T18:45:17.8037841Z `TORCH_XPU_ARCH_LIST` to explicitly specify which device architectures you want the extension 2025-03-17T18:45:17.8038490Z to support: 2025-03-17T18:45:17.8038679Z 2025-03-17T18:45:17.8038959Z ``TORCH_XPU_ARCH_LIST="pvc,xe-lpg" python build_my_extension.py`` 2025-03-17T18:45:17.8039375Z 2025-03-17T18:45:17.8039715Z Note that while it's possible to include all supported archs, the more archs get included the 2025-03-17T18:45:17.8040547Z slower the building process will be, as it will build a separate kernel image for each arch. 2025-03-17T18:45:17.8041053Z 2025-03-17T18:45:17.8041253Z Note: Ninja is required to build SyclExtension. 2025-03-17T18:45:17.8041587Z 2025-03-17T18:45:17.8041894Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.8042403Z 2025-03-17T18:45:17.8043057Z msg = Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1502. 2025-03-17T18:45:17.8044078Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.8044508Z 2025-03-17T18:45:17.8044733Z Load a PyTorch C++ extension just-in-time (JIT). 2025-03-17T18:45:17.8045008Z 2025-03-17T18:45:17.8045361Z To load an extension, a Ninja build file is emitted, which is used to 2025-03-17T18:45:17.8046019Z compile the given sources into a dynamic library. This library is 2025-03-17T18:45:17.8046654Z subsequently loaded into the current Python process as a module and 2025-03-17T18:45:17.8047276Z returned from this function, ready for use. 2025-03-17T18:45:17.8047555Z 2025-03-17T18:45:17.8047844Z By default, the directory to which the build file is emitted and the 2025-03-17T18:45:17.8048631Z resulting library compiled to is ``/torch_extensions/``, where 2025-03-17T18:45:17.8049334Z ```` is the temporary folder on the current platform and ```` 2025-03-17T18:45:17.8050057Z the name of the extension. This location can be overridden in two ways. 2025-03-17T18:45:17.8050752Z First, if the ``TORCH_EXTENSIONS_DIR`` environment variable is set, it 2025-03-17T18:45:17.8051417Z replaces ``/torch_extensions`` and all extensions will be compiled 2025-03-17T18:45:17.8052072Z into subfolders of this directory. Second, if the ``build_directory`` 2025-03-17T18:45:17.8063918Z argument to this function is supplied, it overrides the entire path, i.e. 2025-03-17T18:45:17.8064493Z the library will be compiled into that folder directly. 
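The paragraph above lists two ways to redirect where the JIT-compiled library lands: the ``TORCH_EXTENSIONS_DIR`` environment variable and the ``build_directory`` argument. A short sketch of both follows; the paths and the extension name are placeholders.

    # Sketch: controlling where torch.utils.cpp_extension.load() emits its Ninja
    # build files and the compiled library. Paths and names are placeholders.
    import os
    from torch.utils.cpp_extension import load

    # Option 1: redirect every JIT extension built by this process.
    os.environ["TORCH_EXTENSIONS_DIR"] = "/tmp/my_torch_extensions"

    # Option 2: override the full path for this one extension.
    build_dir = "/tmp/my_extension_build"
    os.makedirs(build_dir, exist_ok=True)
    module = load(
        name="my_extension",
        sources=["extension.cpp"],
        build_directory=build_dir,   # library is compiled into this folder directly
        verbose=True,
    )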
2025-03-17T18:45:17.8064783Z 2025-03-17T18:45:17.8065021Z To compile the sources, the default system compiler (``c++``) is used, 2025-03-17T18:45:17.8065632Z which can be overridden by setting the ``CXX`` environment variable. To pass 2025-03-17T18:45:17.8066252Z additional arguments to the compilation process, ``extra_cflags`` or 2025-03-17T18:45:17.8066943Z ``extra_ldflags`` can be provided. For example, to compile your extension 2025-03-17T18:45:17.8067526Z with optimizations, pass ``extra_cflags=['-O3']``. You can also use 2025-03-17T18:45:17.8068045Z ``extra_cflags`` to pass further include directories. 2025-03-17T18:45:17.8068321Z 2025-03-17T18:45:17.8068583Z CUDA support with mixed compilation is provided. Simply pass CUDA source 2025-03-17T18:45:17.8069152Z files (``.cu`` or ``.cuh``) along with other sources. Such files will be 2025-03-17T18:45:17.8069863Z detected and compiled with nvcc rather than the C++ compiler. This includes 2025-03-17T18:45:17.8070464Z passing the CUDA lib64 directory as a library directory, and linking 2025-03-17T18:45:17.8070969Z ``cudart``. You can pass additional flags to nvcc via 2025-03-17T18:45:17.8071460Z ``extra_cuda_cflags``, just like with ``extra_cflags`` for C++. Various 2025-03-17T18:45:17.8072047Z heuristics for finding the CUDA install directory are used, which usually 2025-03-17T18:45:17.8072628Z work fine. If not, setting the ``CUDA_HOME`` environment variable is the 2025-03-17T18:45:17.8073071Z safest option. 2025-03-17T18:45:17.8073228Z 2025-03-17T18:45:17.8073528Z SYCL support with mixed compilation is provided. Simply pass SYCL source 2025-03-17T18:45:17.8074107Z files (``.sycl``) along with other sources. Such files will be detected 2025-03-17T18:45:17.8074665Z and compiled with SYCL compiler (such as Intel DPC++ Compiler) rather 2025-03-17T18:45:17.8075228Z than the C++ compiler. You can pass additional flags to SYCL compiler 2025-03-17T18:45:17.8075766Z via ``extra_sycl_cflags``, just like with ``extra_cflags`` for C++. 2025-03-17T18:45:17.8076308Z SYCL compiler is expected to be found via system PATH environment 2025-03-17T18:45:17.8076737Z variable. 2025-03-17T18:45:17.8076877Z 2025-03-17T18:45:17.8076966Z Args: 2025-03-17T18:45:17.8077319Z name: The name of the extension to build. This MUST be the same as the 2025-03-17T18:45:17.8077777Z name of the pybind11 module! 2025-03-17T18:45:17.8078218Z sources: A list of relative or absolute paths to C++ source files. 2025-03-17T18:45:17.8078790Z extra_cflags: optional list of compiler flags to forward to the build. 2025-03-17T18:45:17.8079381Z extra_cuda_cflags: optional list of compiler flags to forward to nvcc 2025-03-17T18:45:17.8079848Z when building CUDA sources. 2025-03-17T18:45:17.8080297Z extra_sycl_cflags: optional list of compiler flags to forward to SYCL 2025-03-17T18:45:17.8080779Z compiler when building SYCL sources. 2025-03-17T18:45:17.8081247Z extra_ldflags: optional list of linker flags to forward to the build. 2025-03-17T18:45:17.8081824Z extra_include_paths: optional list of include directories to forward 2025-03-17T18:45:17.8082277Z to the build. 2025-03-17T18:45:17.8082632Z build_directory: optional path to use as build workspace. 2025-03-17T18:45:17.8083595Z verbose: If ``True``, turns on verbose logging of load steps. 2025-03-17T18:45:17.8084146Z with_cuda: Determines whether CUDA headers and libraries are added to 2025-03-17T18:45:17.8084666Z the build. 
If set to ``None`` (default), this value is 2025-03-17T18:45:17.8085175Z automatically determined based on the existence of ``.cu`` or 2025-03-17T18:45:17.8085695Z ``.cuh`` in ``sources``. Set it to `True`` to force CUDA headers 2025-03-17T18:45:17.8086113Z and libraries to be included. 2025-03-17T18:45:17.8086569Z with_sycl: Determines whether SYCL headers and libraries are added to 2025-03-17T18:45:17.8087099Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:17.8087609Z automatically determined based on the existence of ``.sycl`` in 2025-03-17T18:45:17.8088121Z ``sources``. Set it to `True`` to force SYCL headers and 2025-03-17T18:45:17.8088522Z libraries to be included. 2025-03-17T18:45:17.8088950Z is_python_module: If ``True`` (default), imports the produced shared 2025-03-17T18:45:17.8089482Z library as a Python module. If ``False``, behavior depends on 2025-03-17T18:45:17.8089908Z ``is_standalone``. 2025-03-17T18:45:17.8090306Z is_standalone: If ``False`` (default) loads the constructed extension 2025-03-17T18:45:17.8090854Z into the process as a plain dynamic library. If ``True``, build a 2025-03-17T18:45:17.8091292Z standalone executable. 2025-03-17T18:45:17.8091487Z 2025-03-17T18:45:17.8091591Z Returns: 2025-03-17T18:45:17.8091842Z If ``is_python_module`` is ``True``: 2025-03-17T18:45:17.8092290Z Returns the loaded PyTorch extension as a Python module. 2025-03-17T18:45:17.8092588Z 2025-03-17T18:45:17.8092810Z If ``is_python_module`` is ``False`` and ``is_standalone`` is ``False``: 2025-03-17T18:45:17.8093370Z Returns nothing. (The shared library is loaded into the process as 2025-03-17T18:45:17.8093817Z a side effect.) 2025-03-17T18:45:17.8093985Z 2025-03-17T18:45:17.8094119Z If ``is_standalone`` is ``True``. 2025-03-17T18:45:17.8094550Z Return the path to the executable. (On Windows, TORCH_LIB_PATH is 2025-03-17T18:45:17.8095065Z added to the PATH environment variable as a side effect.) 2025-03-17T18:45:17.8095426Z 2025-03-17T18:45:17.8095529Z Example: 2025-03-17T18:45:17.8095755Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.8096080Z >>> from torch.utils.cpp_extension import load 2025-03-17T18:45:17.8096447Z >>> module = load( 2025-03-17T18:45:17.8096725Z ... name='extension', 2025-03-17T18:45:17.8097088Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:17.8097486Z ... extra_cflags=['-O2'], 2025-03-17T18:45:17.8097796Z ... verbose=True) 2025-03-17T18:45:17.8097981Z 2025-03-17T18:45:17.8098246Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.8098639Z 2025-03-17T18:45:17.8099182Z msg = Cannot scrape callname=load_inline in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1811. 2025-03-17T18:45:17.8100111Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.8100514Z 2025-03-17T18:45:17.8100735Z Load a PyTorch C++ extension just-in-time (JIT) from string sources. 2025-03-17T18:45:17.8101082Z 2025-03-17T18:45:17.8101323Z This function behaves exactly like :func:`load`, but takes its sources as 2025-03-17T18:45:17.8101923Z strings rather than filenames. These strings are stored to files in the 2025-03-17T18:45:17.8102509Z build directory, after which the behavior of :func:`load_inline` is 2025-03-17T18:45:17.8102961Z identical to :func:`load`. 
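The ``load`` example above shows the minimal call; as a complement, here is a hedged sketch that exercises the CUDA-specific knobs described earlier (``extra_cuda_cflags`` forwarded to nvcc, ``CUDA_HOME`` as the fallback when the install directory is not found, ``with_cuda`` to force CUDA handling). The file names, include path, and flags are illustrative only.

    # Sketch: JIT-compiling a mixed C++/CUDA extension with explicit nvcc flags.
    import os
    from torch.utils.cpp_extension import load

    # If the CUDA install is not found by the usual heuristics, point at it directly.
    os.environ.setdefault("CUDA_HOME", "/usr/local/cuda")

    ext = load(
        name="my_cuda_extension",
        sources=["op.cpp", "op_kernel.cu"],            # .cu files are routed to nvcc
        extra_cflags=["-O3"],                          # host (C++) compiler flags
        extra_cuda_cflags=["-O3", "--use_fast_math"],  # forwarded to nvcc
        extra_include_paths=["include/"],              # additional include directories
        with_cuda=True,                                # redundant given the .cu source,
                                                       # but forces CUDA headers/libs
        verbose=True,
    )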
2025-03-17T18:45:17.8103145Z 2025-03-17T18:45:17.8103252Z See `the 2025-03-17T18:45:17.8103724Z tests `_ 2025-03-17T18:45:17.8104313Z for good examples of using this function. 2025-03-17T18:45:17.8104603Z 2025-03-17T18:45:17.8104858Z Sources may omit two required parts of a typical non-inline C++ extension: 2025-03-17T18:45:17.8105481Z the necessary header includes, as well as the (pybind11) binding code. More 2025-03-17T18:45:17.8106111Z precisely, strings passed to ``cpp_sources`` are first concatenated into a 2025-03-17T18:45:17.8106792Z single ``.cpp`` file. This file is then prepended with ``#include 2025-03-17T18:45:17.8107222Z ``. 2025-03-17T18:45:17.8107392Z 2025-03-17T18:45:17.8107646Z Furthermore, if the ``functions`` argument is supplied, bindings will be 2025-03-17T18:45:17.8108259Z automatically generated for each function specified. ``functions`` can 2025-03-17T18:45:17.8108859Z either be a list of function names, or a dictionary mapping from function 2025-03-17T18:45:17.8109452Z names to docstrings. If a list is given, the name of each function is used 2025-03-17T18:45:17.8109912Z as its docstring. 2025-03-17T18:45:17.8110065Z 2025-03-17T18:45:17.8110306Z The sources in ``cuda_sources`` are concatenated into a separate ``.cu`` 2025-03-17T18:45:17.8110828Z file and prepended with ``torch/types.h``, ``cuda.h`` and 2025-03-17T18:45:17.8111347Z ``cuda_runtime.h`` includes. The ``.cpp`` and ``.cu`` files are compiled 2025-03-17T18:45:17.8111932Z separately, but ultimately linked into a single library. Note that no 2025-03-17T18:45:17.8112524Z bindings are generated for functions in ``cuda_sources`` per se. To bind 2025-03-17T18:45:17.8113115Z to a CUDA kernel, you must create a C++ function that calls it, and either 2025-03-17T18:45:17.8113690Z declare or define this C++ function in one of the ``cpp_sources`` (and 2025-03-17T18:45:17.8114188Z include its name in ``functions``). 2025-03-17T18:45:17.8114414Z 2025-03-17T18:45:17.8114642Z The sources in ``sycl_sources`` are concatenated into a separate ``.sycl`` 2025-03-17T18:45:17.8115221Z file and prepended with ``torch/types.h``, ``sycl/sycl.hpp`` includes. 2025-03-17T18:45:17.8115782Z The ``.cpp`` and ``.sycl`` files are compiled separately, but ultimately 2025-03-17T18:45:17.8116339Z linked into a single library. Note that no bindings are generated for 2025-03-17T18:45:17.8116912Z functions in ``sycl_sources`` per se. To bind to a SYCL kernel, you must 2025-03-17T18:45:17.8117478Z create a C++ function that calls it, and either declare or define this 2025-03-17T18:45:17.8118045Z C++ function in one of the ``cpp_sources`` (and include its name 2025-03-17T18:45:17.8118460Z in ``functions``). 2025-03-17T18:45:17.8118612Z 2025-03-17T18:45:17.8118812Z See :func:`load` for a description of arguments omitted below. 2025-03-17T18:45:17.8119122Z 2025-03-17T18:45:17.8119222Z Args: 2025-03-17T18:45:17.8119574Z cpp_sources: A string, or list of strings, containing C++ source code. 2025-03-17T18:45:17.8120154Z cuda_sources: A string, or list of strings, containing CUDA source code. 2025-03-17T18:45:17.8120739Z sycl_sources: A string, or list of strings, containing SYCL source code. 2025-03-17T18:45:17.8121310Z functions: A list of function names for which to generate function 2025-03-17T18:45:17.8121866Z bindings. If a dictionary is given, it should map function names to 2025-03-17T18:45:17.8122404Z docstrings (which are otherwise just the function names). 
2025-03-17T18:45:17.8122950Z with_cuda: Determines whether CUDA headers and libraries are added to 2025-03-17T18:45:17.8123479Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:17.8123984Z automatically determined based on whether ``cuda_sources`` is 2025-03-17T18:45:17.8124478Z provided. Set it to ``True`` to force CUDA headers 2025-03-17T18:45:17.8124880Z and libraries to be included. 2025-03-17T18:45:17.8125323Z with_sycl: Determines whether SYCL headers and libraries are added to 2025-03-17T18:45:17.8125850Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:17.8126350Z automatically determined based on whether ``sycl_sources`` is 2025-03-17T18:45:17.8126895Z provided. Set it to ``True`` to force SYCL headers 2025-03-17T18:45:17.8127296Z and libraries to be included. 2025-03-17T18:45:17.8127738Z with_pytorch_error_handling: Determines whether pytorch error and 2025-03-17T18:45:17.8128290Z warning macros are handled by pytorch instead of pybind. To do 2025-03-17T18:45:17.8128851Z this, each function ``foo`` is called via an intermediary ``_safe_foo`` 2025-03-17T18:45:17.8129417Z function. This redirection might cause issues in obscure cases 2025-03-17T18:45:17.8129951Z of cpp. This flag should be set to ``False`` when this redirect 2025-03-17T18:45:17.8130368Z causes issues. 2025-03-17T18:45:17.8130543Z 2025-03-17T18:45:17.8130637Z Example: 2025-03-17T18:45:17.8130927Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:17.8131364Z >>> from torch.utils.cpp_extension import load_inline 2025-03-17T18:45:17.8131743Z >>> source = """ 2025-03-17T18:45:17.8132065Z at::Tensor sin_add(at::Tensor x, at::Tensor y) { 2025-03-17T18:45:17.8132438Z return x.sin() + y.sin(); 2025-03-17T18:45:17.8132724Z } 2025-03-17T18:45:17.8132941Z """ 2025-03-17T18:45:17.8133211Z >>> module = load_inline(name='inline_extension', 2025-03-17T18:45:17.8133591Z ... cpp_sources=[source], 2025-03-17T18:45:17.8133960Z ... functions=['sin_add']) 2025-03-17T18:45:17.8134208Z 2025-03-17T18:45:17.8134303Z .. note:: 2025-03-17T18:45:17.8134690Z Since load_inline will just-in-time compile the source code, please ensure 2025-03-17T18:45:17.8135327Z that you have the right toolchains installed in the runtime. For example, 2025-03-17T18:45:17.8135919Z when loading C++, make sure a C++ compiler is available. If you're loading 2025-03-17T18:45:17.8136522Z a CUDA extension, you will need to additionally install the corresponding CUDA 2025-03-17T18:45:17.8137340Z toolkit (nvcc and any other dependencies your code has). Compiling toolchains 2025-03-17T18:45:17.8137972Z are not included when you install torch and must be additionally installed. 2025-03-17T18:45:17.8138350Z 2025-03-17T18:45:17.8138612Z During compiling, by default, the Ninja backend uses #CPUS + 2 workers to build 2025-03-17T18:45:17.8139297Z the extension. This may use up too many resources on some systems. One 2025-03-17T18:45:17.8139884Z can control the number of workers by setting the `MAX_JOBS` environment 2025-03-17T18:45:17.8140360Z variable to a non-negative number. 2025-03-17T18:45:17.8140596Z 2025-03-17T18:45:17.8140859Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.8141252Z 2025-03-17T18:45:17.8144652Z msg = Cannot scrape callname=ThroughputBenchmark in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/throughput_benchmark.py line=61. 
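The ``load_inline`` example above covers the pure C++ path; the CUDA rule spelled out earlier (bindings are only generated for functions reachable from ``cpp_sources``, so a CUDA kernel needs a C++-callable wrapper whose name goes into ``functions``) can be sketched as below. The kernel, names, and shapes are all illustrative, and this assumes a CUDA toolkit is available at runtime as the note above requires.

    # Sketch: binding a CUDA kernel via load_inline. The wrapper add_one_cuda is
    # defined in cuda_sources and only *declared* in cpp_sources, which is where
    # the pybind11 binding named in `functions` is generated.
    import torch
    from torch.utils.cpp_extension import load_inline

    cpp_src = "torch::Tensor add_one_cuda(torch::Tensor x);"  # declaration only

    cuda_src = r"""
    __global__ void add_one_kernel(const float* in, float* out, int64_t n) {
        int64_t i = blockIdx.x * (int64_t)blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i] + 1.0f;
    }

    torch::Tensor add_one_cuda(torch::Tensor x) {
        auto y = torch::empty_like(x);
        const int64_t n = x.numel();
        const int threads = 256;
        const int blocks = static_cast<int>((n + threads - 1) / threads);
        add_one_kernel<<<blocks, threads>>>(
            x.data_ptr<float>(), y.data_ptr<float>(), n);
        return y;
    }
    """

    ext = load_inline(
        name="add_one_inline",        # illustrative extension name
        cpp_sources=[cpp_src],
        cuda_sources=[cuda_src],      # compiled with nvcc and linked into one library
        functions=["add_one_cuda"],   # binding generated for the C++-visible symbol
        verbose=True,
    )
    print(ext.add_one_cuda(torch.zeros(8, device="cuda")))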
2025-03-17T18:45:17.8145672Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.8146086Z 2025-03-17T18:45:17.8146395Z This class is a wrapper around a c++ component throughput_benchmark::ThroughputBenchmark. 2025-03-17T18:45:17.8146894Z 2025-03-17T18:45:17.8147200Z This wrapper on the throughput_benchmark::ThroughputBenchmark component is responsible 2025-03-17T18:45:17.8147904Z for executing a PyTorch module (nn.Module or ScriptModule) under an inference 2025-03-17T18:45:17.8148531Z server like load. It can emulate multiple calling threads to a single module 2025-03-17T18:45:17.8149149Z provided. In the future we plan to enhance this component to support inter and 2025-03-17T18:45:17.8149783Z intra-op parallelism as well as multiple models running in a single process. 2025-03-17T18:45:17.8150166Z 2025-03-17T18:45:17.8150427Z Please note that even though nn.Module is supported, it might incur an overhead 2025-03-17T18:45:17.8151042Z from the need to hold GIL every time we execute Python code or pass around 2025-03-17T18:45:17.8151787Z inputs as Python objects. As soon as you have a ScriptModule version of your 2025-03-17T18:45:17.8152414Z model for inference deployment it is better to switch to using it in this 2025-03-17T18:45:17.8152886Z benchmark. 2025-03-17T18:45:17.8153034Z 2025-03-17T18:45:17.8153130Z Example:: 2025-03-17T18:45:17.8153279Z 2025-03-17T18:45:17.8153408Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:17.8153806Z >>> from torch.utils import ThroughputBenchmark 2025-03-17T18:45:17.8154214Z >>> bench = ThroughputBenchmark(my_module) 2025-03-17T18:45:17.8154629Z >>> # Pre-populate benchmark's data set with the inputs 2025-03-17T18:45:17.8155026Z >>> for input in inputs: 2025-03-17T18:45:17.8155447Z ... # Both args and kwargs work, same as any PyTorch Module / ScriptModule 2025-03-17T18:45:17.8155948Z ... bench.add_input(input[0], x2=input[1]) 2025-03-17T18:45:17.8156406Z >>> # Inputs supplied above are randomly used during the execution 2025-03-17T18:45:17.8156841Z >>> stats = bench.benchmark( 2025-03-17T18:45:17.8157165Z ... num_calling_threads=4, 2025-03-17T18:45:17.8157491Z ... num_warmup_iters = 100, 2025-03-17T18:45:17.8157811Z ... num_iters = 1000, 2025-03-17T18:45:17.8158087Z ... ) 2025-03-17T18:45:17.8158407Z >>> print("Avg latency (ms): {}".format(stats.latency_avg_ms)) 2025-03-17T18:45:17.8158903Z >>> print("Number of iterations: {}".format(stats.num_iters)) 2025-03-17T18:45:17.8159216Z 2025-03-17T18:45:17.8159470Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.8159862Z 2025-03-17T18:45:17.9051719Z msg = Cannot scrape callname=DistributedSampler in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/distributed.py line=18. 2025-03-17T18:45:17.9052867Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:17.9053471Z Sampler that restricts data loading to a subset of the dataset. 2025-03-17T18:45:17.9053807Z 2025-03-17T18:45:17.9053960Z It is especially useful in conjunction with 2025-03-17T18:45:17.9054470Z :class:`torch.nn.parallel.DistributedDataParallel`. In such a case, each 2025-03-17T18:45:17.9055115Z process can pass a :class:`~torch.utils.data.DistributedSampler` instance as a 2025-03-17T18:45:17.9055760Z :class:`~torch.utils.data.DataLoader` sampler, and load a subset of the 2025-03-17T18:45:17.9056305Z original dataset that is exclusive to it. 
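The ThroughputBenchmark example above relies on undefined placeholder variables (my_module, inputs); a self-contained version of the same flow, using a scripted module as the text recommends, might look like the sketch below. The module, tensor shapes, and iteration counts are arbitrary choices for illustration.

    # Sketch: end-to-end ThroughputBenchmark run with a ScriptModule.
    import torch
    from torch.utils import ThroughputBenchmark

    class TwoInput(torch.nn.Module):
        def forward(self, x, x2):
            return torch.sigmoid(x + x2)

    module = torch.jit.script(TwoInput())   # ScriptModule avoids per-call GIL overhead
    bench = ThroughputBenchmark(module)

    # Pre-populate the benchmark's data set; both args and kwargs work.
    for _ in range(16):
        bench.add_input(torch.randn(32, 64), x2=torch.randn(32, 64))

    stats = bench.benchmark(
        num_calling_threads=4,
        num_warmup_iters=100,
        num_iters=1000,
    )
    print("Avg latency (ms):", stats.latency_avg_ms)
    print("Number of iterations:", stats.num_iters)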
2025-03-17T18:45:17.9056561Z 2025-03-17T18:45:17.9056664Z .. note:: 2025-03-17T18:45:17.9057051Z Dataset is assumed to be of constant size and that any instance of it always 2025-03-17T18:45:17.9057570Z returns the same elements in the same order. 2025-03-17T18:45:17.9057837Z 2025-03-17T18:45:17.9057932Z Args: 2025-03-17T18:45:17.9058191Z dataset: Dataset used for sampling. 2025-03-17T18:45:17.9058655Z num_replicas (int, optional): Number of processes participating in 2025-03-17T18:45:17.9059262Z distributed training. By default, :attr:`world_size` is retrieved from the 2025-03-17T18:45:17.9059766Z current distributed group. 2025-03-17T18:45:17.9060249Z rank (int, optional): Rank of the current process within :attr:`num_replicas`. 2025-03-17T18:45:17.9060839Z By default, :attr:`rank` is retrieved from the current distributed 2025-03-17T18:45:17.9061271Z group. 2025-03-17T18:45:17.9061663Z shuffle (bool, optional): If ``True`` (default), sampler will shuffle the 2025-03-17T18:45:17.9062120Z indices. 2025-03-17T18:45:17.9062483Z seed (int, optional): random seed used to shuffle the sampler if 2025-03-17T18:45:17.9063013Z :attr:`shuffle=True`. This number should be identical across all 2025-03-17T18:45:17.9063514Z processes in the distributed group. Default: ``0``. 2025-03-17T18:45:17.9064029Z drop_last (bool, optional): if ``True``, then the sampler will drop the 2025-03-17T18:45:17.9064660Z tail of the data to make it evenly divisible across the number of 2025-03-17T18:45:17.9065197Z replicas. If ``False``, the sampler will add extra indices to make 2025-03-17T18:45:17.9065750Z the data evenly divisible across the replicas. Default: ``False``. 2025-03-17T18:45:17.9066089Z 2025-03-17T18:45:17.9066205Z .. warning:: 2025-03-17T18:45:17.9066643Z In distributed mode, calling the :meth:`set_epoch` method at 2025-03-17T18:45:17.9067233Z the beginning of each epoch **before** creating the :class:`DataLoader` iterator 2025-03-17T18:45:17.9067891Z is necessary to make shuffling work properly across multiple epochs. Otherwise, 2025-03-17T18:45:17.9068409Z the same ordering will be always used. 2025-03-17T18:45:17.9068661Z 2025-03-17T18:45:17.9068758Z Example:: 2025-03-17T18:45:17.9068907Z 2025-03-17T18:45:17.9069016Z >>> # xdoctest: +SKIP 2025-03-17T18:45:17.9069441Z >>> sampler = DistributedSampler(dataset) if is_distributed else None 2025-03-17T18:45:17.9069974Z >>> loader = DataLoader(dataset, shuffle=(sampler is None), 2025-03-17T18:45:17.9070394Z ... sampler=sampler) 2025-03-17T18:45:17.9070777Z >>> for epoch in range(start_epoch, n_epochs): 2025-03-17T18:45:17.9071146Z ... if is_distributed: 2025-03-17T18:45:17.9071476Z ... sampler.set_epoch(epoch) 2025-03-17T18:45:17.9071814Z ... 
train(loader) 2025-03-17T18:45:17.9072091Z 2025-03-17T18:45:17.9072473Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:17.9072851Z 2025-03-17T18:45:18.0862641Z gathering tests 2025-03-17T18:45:18.0873768Z running 709 test(s) 2025-03-17T18:45:18.0894308Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::typename:0, line 1077 <- wrt source file 2025-03-17T18:45:18.0901626Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::typename:0 2025-03-17T18:45:18.0902788Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::is_tensor:0, line 1113 <- wrt source file 2025-03-17T18:45:18.0906784Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::is_tensor:0 2025-03-17T18:45:18.0908607Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_device:0, line 1182 <- wrt source file 2025-03-17T18:45:18.0910045Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_device:0 2025-03-17T18:45:18.0911775Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_tensor_type:0, line 1231 <- wrt source file 2025-03-17T18:45:18.0913523Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_tensor_type:0 2025-03-17T18:45:18.0914765Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_dtype:0, line 1268 <- wrt source file 2025-03-17T18:45:18.0916027Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::set_default_dtype:0 2025-03-17T18:45:18.0917351Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::use_deterministic_algorithms:0, line 1423 <- wrt source file 2025-03-17T18:45:18.0918950Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::use_deterministic_algorithms:0 2025-03-17T18:45:18.0920513Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::compile:0, line 2523 <- wrt source file 2025-03-17T18:45:18.0921660Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::compile:0 2025-03-17T18:45:18.0923059Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::_is_device_backend_autoload_enabled:0, line 2785 <- wrt source file 2025-03-17T18:45:18.0924451Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/__init__.py::_is_device_backend_autoload_enabled:0 2025-03-17T18:45:18.0925810Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_C.cpython-313-x86_64-linux-gnu.so::Generator:0, line 15 <- wrt source file 2025-03-17T18:45:18.0927188Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_C.cpython-313-x86_64-linux-gnu.so::Generator:0 2025-03-17T18:45:18.0928541Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_C.cpython-313-x86_64-linux-gnu.so::_LinAlgError:0, line 5 <- wrt source file 2025-03-17T18:45:18.0929933Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_C.cpython-313-x86_64-linux-gnu.so::_LinAlgError:0 2025-03-17T18:45:18.0931197Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::custom_op:0, line 55 <- wrt source file 2025-03-17T18:45:18.0932362Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::custom_op:0 2025-03-17T18:45:18.0933479Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::impl:0, line 137 <- wrt source file 2025-03-17T18:45:18.0934597Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::impl:0 2025-03-17T18:45:18.0935734Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::impl_abstract:0, line 206 <- wrt source file 2025-03-17T18:45:18.1497479Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_custom_ops.py::impl_abstract:0 2025-03-17T18:45:18.1498882Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_namedtensor_internals.py::update_names:0, line 118 <- wrt source file 2025-03-17T18:45:18.1500221Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_namedtensor_internals.py::update_names:0 2025-03-17T18:45:18.1501480Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.register_hook:0, line 672 <- wrt source file 2025-03-17T18:45:18.1505494Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.register_hook:0 2025-03-17T18:45:18.1506894Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.register_post_accumulate_grad_hook:0, line 729 <- wrt source file 2025-03-17T18:45:18.1522444Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.register_post_accumulate_grad_hook:0 2025-03-17T18:45:18.1523775Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.refine_names:0, line 1347 <- wrt source file 2025-03-17T18:45:18.1640014Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.refine_names:0 2025-03-17T18:45:18.1643114Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.align_to:0, line 1392 <- wrt source file 2025-03-17T18:45:18.1647910Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.align_to:0 2025-03-17T18:45:18.1649096Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.rename:0, line 1465 <- wrt source file 2025-03-17T18:45:18.1655001Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.rename:0 2025-03-17T18:45:18.1656351Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.to_sparse_coo:0, line 1495 <- wrt source file 2025-03-17T18:45:18.1660153Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py::Tensor.to_sparse_coo:0 2025-03-17T18:45:18.1661382Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor_str.py::set_printoptions:0, line 53 <- wrt source file 2025-03-17T18:45:18.1678867Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor_str.py::set_printoptions:0 2025-03-17T18:45:18.1680107Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::broadcast_tensors:0, line 64 <- wrt source file 2025-03-17T18:45:18.1684887Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::broadcast_tensors:0 2025-03-17T18:45:18.1686113Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::broadcast_shapes:0, line 92 <- wrt source file 
2025-03-17T18:45:18.1687855Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::broadcast_shapes:0 2025-03-17T18:45:18.1689032Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::split:0, line 193 <- wrt source file 2025-03-17T18:45:18.1699563Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::split:0 2025-03-17T18:45:18.1700697Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::einsum:0, line 307 <- wrt source file 2025-03-17T18:45:18.1747752Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::einsum:0 2025-03-17T18:45:18.1748988Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::_unique_consecutive_impl:0, line 1041 <- wrt source file 2025-03-17T18:45:18.1758931Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::_unique_consecutive_impl:0 2025-03-17T18:45:18.1760192Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::tensordot:0, line 1316 <- wrt source file 2025-03-17T18:45:18.1768666Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::tensordot:0 2025-03-17T18:45:18.1769952Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::cartesian_prod:0, line 1400 <- wrt source file 2025-03-17T18:45:18.1775677Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::cartesian_prod:0 2025-03-17T18:45:18.1776878Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::block_diag:0, line 1434 <- wrt source file 2025-03-17T18:45:18.1783782Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::block_diag:0 2025-03-17T18:45:18.1784940Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::cdist:0, line 1485 <- wrt source file 2025-03-17T18:45:18.1805557Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::cdist:0 2025-03-17T18:45:18.1806716Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_1d:0, line 1526 <- wrt source file 2025-03-17T18:45:18.1820698Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_1d:0 2025-03-17T18:45:18.1821879Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_2d:0, line 1562 <- wrt source file 2025-03-17T18:45:18.1836959Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_2d:0 2025-03-17T18:45:18.1838266Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_3d:0, line 1600 <- wrt source file 2025-03-17T18:45:18.1856881Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::atleast_3d:0 2025-03-17T18:45:18.1858033Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::norm:0, line 1773 <- wrt source file 2025-03-17T18:45:18.1887449Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::norm:0 2025-03-17T18:45:18.1888631Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::unravel_index:0, line 1940 <- wrt source file 2025-03-17T18:45:18.1912246Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::unravel_index:0 
2025-03-17T18:45:18.1913808Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::chain_matmul:0, line 2040 <- wrt source file 2025-03-17T18:45:18.1915914Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::chain_matmul:0 2025-03-17T18:45:18.1917226Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::_lu_impl:0, line 2140 <- wrt source file 2025-03-17T18:45:18.1918393Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py::_lu_impl:0 2025-03-17T18:45:18.1919635Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py::list:0, line 468 <- wrt source file 2025-03-17T18:45:18.1920906Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py::list:0 2025-03-17T18:45:18.1921942Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py::help:0, line 528 <- wrt source file 2025-03-17T18:45:18.1923002Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py::help:0 2025-03-17T18:45:18.1924119Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library.define:0, line 151 <- wrt source file 2025-03-17T18:45:18.1925304Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library.define:0 2025-03-17T18:45:18.1926609Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library._impl_with_aoti_compile:0, line 251 <- wrt source file 2025-03-17T18:45:18.1930179Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library._impl_with_aoti_compile:0 2025-03-17T18:45:18.1931441Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library.impl:0, line 306 <- wrt source file 2025-03-17T18:45:18.1934659Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::Library.impl:0 2025-03-17T18:45:18.1935793Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::define:0, line 499 <- wrt source file 2025-03-17T18:45:18.1945820Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::define:0 2025-03-17T18:45:18.1946981Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::impl:0, line 605 <- wrt source file 2025-03-17T18:45:18.1965571Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::impl:0 2025-03-17T18:45:18.1967190Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_kernel:0, line 786 <- wrt source file 2025-03-17T18:45:18.1968416Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_kernel:0 2025-03-17T18:45:18.1969732Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_autocast:0, line 854 <- wrt source file 2025-03-17T18:45:18.1970959Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_autocast:0 2025-03-17T18:45:18.1972192Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_torch_dispatch:0, line 1190 <- wrt source file 2025-03-17T18:45:18.2058628Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_torch_dispatch:0 2025-03-17T18:45:18.2059857Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_vmap:0, line 1279 <- wrt source file 
2025-03-17T18:45:18.2214067Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py::register_vmap:0 2025-03-17T18:45:18.2215311Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::get_ignored_functions:0, line 112 <- wrt source file 2025-03-17T18:45:18.2220120Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::get_ignored_functions:0 2025-03-17T18:45:18.2221385Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::get_testing_overrides:0, line 418 <- wrt source file 2025-03-17T18:45:18.2252434Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::get_testing_overrides:0 2025-03-17T18:45:18.2253697Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::wrap_torch_function:0, line 1571 <- wrt source file 2025-03-17T18:45:18.2255688Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::wrap_torch_function:0 2025-03-17T18:45:18.2256945Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::handle_torch_function:0, line 1706 <- wrt source file 2025-03-17T18:45:18.2258766Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::handle_torch_function:0 2025-03-17T18:45:18.2260064Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::is_tensor_method_or_property:0, line 1954 <- wrt source file 2025-03-17T18:45:18.2284539Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::is_tensor_method_or_property:0 2025-03-17T18:45:18.2285823Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::is_tensor_like:0, line 1973 <- wrt source file 2025-03-17T18:45:18.2294625Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/overrides.py::is_tensor_like:0 2025-03-17T18:45:18.2296188Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/quasirandom.py::SobolEngine:0, line 39 <- wrt source file 2025-03-17T18:45:18.2298000Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/quasirandom.py::SobolEngine:0 2025-03-17T18:45:18.2299244Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::add_safe_globals:0, line 299 <- wrt source file 2025-03-17T18:45:18.2300544Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::add_safe_globals:0 2025-03-17T18:45:18.2301997Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::safe_globals:0, line 324 <- wrt source file 2025-03-17T18:45:18.2304230Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::safe_globals:0 2025-03-17T18:45:18.2305875Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::skip_data:0, line 400 <- wrt source file 2025-03-17T18:45:18.2307538Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::skip_data:0 2025-03-17T18:45:18.2308911Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::register_package:0, line 472 <- wrt source file 2025-03-17T18:45:18.2310211Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::register_package:0 2025-03-17T18:45:18.2311407Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::save:0, line 
948 <- wrt source file 2025-03-17T18:45:18.2312574Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py::save:0 2025-03-17T18:45:18.2313752Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/torch_version.py::TorchVersion:0, line 19 <- wrt source file 2025-03-17T18:45:18.2314984Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/torch_version.py::TorchVersion:0 2025-03-17T18:45:18.2316223Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/__init__.py::list_mode_options:0, line 306 <- wrt source file 2025-03-17T18:45:18.2317530Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/__init__.py::list_mode_options:0 2025-03-17T18:45:18.2319221Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/__init__.py::list_options:0, line 343 <- wrt source file 2025-03-17T18:45:18.2320486Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/__init__.py::list_options:0 2025-03-17T18:45:18.2321891Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/__init__.py::compute_required_storage_length:0, line 1793 <- wrt source file 2025-03-17T18:45:18.2323828Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims_common/__init__.py::compute_required_storage_length:0 2025-03-17T18:45:18.2325237Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py::current_accelerator:0, line 79 <- wrt source file 2025-03-17T18:45:18.2329027Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py::current_accelerator:0 2025-03-17T18:45:18.2330733Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::allow_in_graph:0, line 117 <- wrt source file 2025-03-17T18:45:18.2332024Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::allow_in_graph:0 2025-03-17T18:45:18.2333302Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::substitute_in_graph:0, line 171 <- wrt source file 2025-03-17T18:45:18.2776061Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::substitute_in_graph:0 2025-03-17T18:45:18.2778379Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::wrap_numpy:0, line 357 <- wrt source file 2025-03-17T18:45:18.2780727Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::wrap_numpy:0 2025-03-17T18:45:18.2783012Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_compiling:0, line 389 <- wrt source file 2025-03-17T18:45:18.2785419Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_compiling:0 2025-03-17T18:45:18.2787822Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_dynamo_compiling:0, line 410 <- wrt source file 2025-03-17T18:45:18.2790634Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_dynamo_compiling:0 2025-03-17T18:45:18.2793034Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_exporting:0, line 428 <- wrt source file 2025-03-17T18:45:18.2795382Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::is_exporting:0 2025-03-17T18:45:18.2797781Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::save_cache_artifacts:0, line 443 <- wrt source file 2025-03-17T18:45:18.2800258Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::save_cache_artifacts:0 2025-03-17T18:45:18.2802727Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::load_cache_artifacts:0, line 458 <- wrt source file 2025-03-17T18:45:18.2805315Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/compiler/__init__.py::load_cache_artifacts:0 2025-03-17T18:45:18.2807636Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::save:0, line 406 <- wrt source file 2025-03-17T18:45:18.2809807Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::save:0 2025-03-17T18:45:18.2811945Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::load:0, line 488 <- wrt source file 2025-03-17T18:45:18.2814155Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::load:0 2025-03-17T18:45:18.2816607Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::register_dataclass:0, line 586 <- wrt source file 2025-03-17T18:45:18.2819125Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/__init__.py::register_dataclass:0 2025-03-17T18:45:18.2821443Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::Future.add_done_callback:0, line 200 <- wrt source file 2025-03-17T18:45:18.2824045Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::Future.add_done_callback:0 2025-03-17T18:45:18.2826435Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::Future.set_exception:0, line 262 <- wrt source file 2025-03-17T18:45:18.2828745Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::Future.set_exception:0 2025-03-17T18:45:18.2830850Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::collect_all:0, line 295 <- wrt source file 2025-03-17T18:45:18.2833057Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py::collect_all:0 2025-03-17T18:45:18.2834918Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/__init__.py::annotate:0, line 147 <- wrt source file 2025-03-17T18:45:18.2837045Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/__init__.py::annotate:0 2025-03-17T18:45:18.2839035Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/monitor/__init__.py::TensorboardEventHandler:0, line 22 <- wrt source file 2025-03-17T18:45:18.2841401Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/monitor/__init__.py::TensorboardEventHandler:0 2025-03-17T18:45:18.2843659Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::as_nested_tensor:0, line 61 <- wrt source file 2025-03-17T18:45:18.2848309Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::as_nested_tensor:0 2025-03-17T18:45:18.2850671Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::nested_tensor:0, line 240 <- wrt source file 2025-03-17T18:45:18.2853425Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::nested_tensor:0 2025-03-17T18:45:18.2855607Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::narrow:0, line 315 <- wrt source file 2025-03-17T18:45:18.2902069Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::narrow:0 2025-03-17T18:45:18.2904390Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::nested_tensor_from_jagged:0, line 405 <- wrt source file 2025-03-17T18:45:18.2923255Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::nested_tensor_from_jagged:0 2025-03-17T18:45:18.2925664Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::masked_select:0, line 479 <- wrt source file 2025-03-17T18:45:18.2940176Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py::masked_select:0 2025-03-17T18:45:18.2942606Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py::check_sparse_tensor_invariants:0, line 475 <- wrt source file 2025-03-17T18:45:18.2957770Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py::check_sparse_tensor_invariants:0 2025-03-17T18:45:18.2960361Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py::as_sparse_gradcheck:0, line 561 <- wrt source file 2025-03-17T18:45:18.3011564Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py::as_sparse_gradcheck:0 2025-03-17T18:45:18.3013978Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/decorators.py::substitute_in_graph:0, line 317 <- wrt source file 2025-03-17T18:45:18.3016483Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/decorators.py::substitute_in_graph:0 2025-03-17T18:45:18.3019168Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py::VariableTracker.python_type:0, line 313 <- wrt source file 2025-03-17T18:45:18.3021928Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_dynamo/variables/base.py::VariableTracker.python_type:0 2025-03-17T18:45:18.3024605Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_export/utils.py::register_module_as_pytree_input_node:0, line 1233 <- wrt source file 2025-03-17T18:45:18.3027364Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_export/utils.py::register_module_as_pytree_input_node:0 2025-03-17T18:45:18.3029878Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py::aot_function:0, line 886 <- wrt source file 2025-03-17T18:45:18.3277135Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/aot_autograd.py::aot_function:0 2025-03-17T18:45:18.3279390Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/apis.py::grad:0, line 323 <- wrt source file 2025-03-17T18:45:18.3281547Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/apis.py::grad:0 2025-03-17T18:45:18.3283964Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/benchmark_utils.py::benchmark_utilization:0, line 184 <- wrt source file 2025-03-17T18:45:18.3286826Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/benchmark_utils.py::benchmark_utilization:0 2025-03-17T18:45:18.3289291Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::vjp:0, line 232 <- wrt source file 2025-03-17T18:45:18.3315959Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::vjp:0 2025-03-17T18:45:18.3318322Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jacrev:0, line 474 <- wrt source file 2025-03-17T18:45:18.3372355Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jacrev:0 2025-03-17T18:45:18.3828242Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jvp:0, line 1023 <- wrt source file 2025-03-17T18:45:18.3830704Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jvp:0 2025-03-17T18:45:18.3833044Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jacfwd:0, line 1181 <- wrt source file 2025-03-17T18:45:18.3890072Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::jacfwd:0 2025-03-17T18:45:18.3892463Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::hessian:0, line 1341 <- wrt source file 2025-03-17T18:45:18.3908020Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::hessian:0 2025-03-17T18:45:18.3910553Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::functionalize:0, line 1505 <- wrt source file 2025-03-17T18:45:18.3913175Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::functionalize:0 2025-03-17T18:45:18.3915672Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::linearize:0, line 1704 <- wrt source file 2025-03-17T18:45:18.4051859Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/eager_transforms.py::linearize:0 2025-03-17T18:45:18.4054447Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/functional_call.py::functional_call:0, line 36 <- wrt source file 2025-03-17T18:45:18.4057555Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/functional_call.py::functional_call:0 2025-03-17T18:45:18.4059965Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/fx_minifier.py::minifier:0, line 194 <- wrt source file 2025-03-17T18:45:18.4062327Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/fx_minifier.py::minifier:0 2025-03-17T18:45:18.4065047Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py::CompilerWrapper.post_compile:0, line 115 <- wrt source file 2025-03-17T18:45:18.4068245Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py::CompilerWrapper.post_compile:0 2025-03-17T18:45:18.4071080Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/associative_scan.py::associative_scan:0, line 128 <- wrt source file 2025-03-17T18:45:18.4073799Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/associative_scan.py::associative_scan:0 2025-03-17T18:45:18.4076731Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/associative_scan.py::generic_associative_scan:0, line 270 <- wrt source file 2025-03-17T18:45:18.4079606Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/associative_scan.py::generic_associative_scan:0 2025-03-17T18:45:18.4082083Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/cond.py::cond:0, line 110 <- wrt source file 2025-03-17T18:45:18.4084321Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/cond.py::cond:0 2025-03-17T18:45:18.4086711Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/flat_apply.py::FlatApply.__call__:0, line 80 <- wrt source file 2025-03-17T18:45:18.4089310Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/flat_apply.py::FlatApply.__call__:0 2025-03-17T18:45:18.4091687Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/scan.py::scan:0, line 94 <- wrt source file 2025-03-17T18:45:18.4093929Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_higher_order_ops/scan.py::scan:0 2025-03-17T18:45:18.4096432Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/cpp_builder.py::get_name_and_dir_from_output_file_path:0, line 1350 <- wrt source file 2025-03-17T18:45:18.4099270Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_inductor/cpp_builder.py::get_name_and_dir_from_output_file_path:0 2025-03-17T18:45:18.4101870Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::custom_op:0, line 99 <- wrt source file 2025-03-17T18:45:18.4386642Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::custom_op:0 2025-03-17T18:45:18.4389183Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.set_kernel_enabled:0, line 230 <- wrt source file 2025-03-17T18:45:18.4469332Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.set_kernel_enabled:0 2025-03-17T18:45:18.4472240Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_kernel:0, line 299 <- wrt source file 2025-03-17T18:45:18.4474926Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_kernel:0 2025-03-17T18:45:18.4477546Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_fake:0, line 405 <- wrt source file 2025-03-17T18:45:18.4544873Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_fake:0 2025-03-17T18:45:18.4547562Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_autograd:0, line 532 <- wrt source file 2025-03-17T18:45:18.4699827Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_autograd:0 2025-03-17T18:45:18.4702476Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_vmap:0, line 704 <- wrt source file 2025-03-17T18:45:18.4855089Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_vmap:0 2025-03-17T18:45:18.4857753Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_autocast:0, line 790 <- wrt source file 2025-03-17T18:45:18.4860686Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/custom_ops.py::CustomOpDef.register_autocast:0 2025-03-17T18:45:18.4863317Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/fake_class_registry.py::register_fake_class:0, line 197 <- wrt source file 2025-03-17T18:45:18.4865986Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/fake_class_registry.py::register_fake_class:0 2025-03-17T18:45:18.4868647Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/fake_impl.py::FakeImplCtx.new_dynamic_size:0, line 161 <- wrt source file 2025-03-17T18:45:18.4925671Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/fake_impl.py::FakeImplCtx.new_dynamic_size:0 2025-03-17T18:45:18.4927049Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/infer_schema.py::infer_schema:0, line 51 <- wrt source file 2025-03-17T18:45:18.4931837Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/infer_schema.py::infer_schema:0 2025-03-17T18:45:18.4933284Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_logging/_internal.py::set_logs:0, line 442 <- wrt source file 2025-03-17T18:45:18.4934635Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_logging/_internal.py::set_logs:0 2025-03-17T18:45:18.4936007Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_equal:0, line 170 <- wrt source file 2025-03-17T18:45:18.4972486Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_equal:0 2025-03-17T18:45:18.4973837Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::print_assert_equal:0, line 301 <- wrt source file 2025-03-17T18:45:18.4975208Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::print_assert_equal:0 2025-03-17T18:45:18.4976525Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_less:0, line 992 <- wrt source file 2025-03-17T18:45:18.5020138Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_less:0 2025-03-17T18:45:18.5021491Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_string_equal:0, line 1057 <- wrt source file 2025-03-17T18:45:18.5022878Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_string_equal:0 2025-03-17T18:45:18.5024299Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_allclose:0, line 1278 <- wrt source file 2025-03-17T18:45:18.5038627Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_allclose:0 2025-03-17T18:45:18.5040024Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_almost_equal_nulp:0, line 1344 <- wrt source file 2025-03-17T18:45:18.5042652Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_almost_equal_nulp:0 2025-03-17T18:45:18.5044061Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_max_ulp:0, line 1407 <- wrt source file 2025-03-17T18:45:18.5047439Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_array_max_ulp:0 2025-03-17T18:45:18.5048954Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::nulp_diff:0, line 1452 <- wrt source file 2025-03-17T18:45:18.5050241Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::nulp_diff:0 2025-03-17T18:45:18.5051500Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_warns:0, line 1562 <- wrt source file 2025-03-17T18:45:18.5053587Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py::assert_warns:0 2025-03-17T18:45:18.5055160Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/context.py::TorchRefsMode:0, line 86 <- wrt source file 2025-03-17T18:45:18.5057002Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_prims/context.py::TorchRefsMode:0 2025-03-17T18:45:18.5059307Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/amp/grad_scaler.py::GradScaler:0, line 64 <- wrt source file 2025-03-17T18:45:18.5060693Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/amp/grad_scaler.py::GradScaler:0 2025-03-17T18:45:18.5062063Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/qat/modules/linear_relu.py::LinearReLU:0, line 23 <- wrt source file 2025-03-17T18:45:18.5063594Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/qat/modules/linear_relu.py::LinearReLU:0 2025-03-17T18:45:18.5065291Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/dynamic/modules/linear_relu.py::LinearReLU:0, line 22 <- wrt source file 2025-03-17T18:45:18.5067096Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/dynamic/modules/linear_relu.py::LinearReLU:0 2025-03-17T18:45:18.5068721Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearReLU:0, line 25 <- wrt source file 2025-03-17T18:45:18.5070340Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearReLU:0 2025-03-17T18:45:18.5072001Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearLeakyReLU:0, line 66 <- wrt source file 2025-03-17T18:45:18.5073673Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearLeakyReLU:0 2025-03-17T18:45:18.5075279Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearTanh:0, line 140 <- wrt source file 2025-03-17T18:45:18.5076892Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/intrinsic/quantized/modules/linear_relu.py::LinearTanh:0 2025-03-17T18:45:18.5078348Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTMCell:0, line 30 <- wrt source file 2025-03-17T18:45:18.5080152Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTMCell:0 2025-03-17T18:45:18.5081885Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTM:0, line 410 <- wrt source file 2025-03-17T18:45:18.5111164Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantizable/modules/rnn.py::LSTM:0 2025-03-17T18:45:18.5113444Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv1d:0, line 210 <- wrt source file 2025-03-17T18:45:18.5115423Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv1d:0 2025-03-17T18:45:18.5116730Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv2d:0, line 282 <- wrt source file 2025-03-17T18:45:18.5118065Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv2d:0 2025-03-17T18:45:18.5119383Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv3d:0, line 358 <- wrt source file 2025-03-17T18:45:18.5120836Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/functional.py::conv3d:0 2025-03-17T18:45:18.5122155Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/__init__.py::Quantize:0, line 95 <- wrt source file 2025-03-17T18:45:18.5123548Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/__init__.py::Quantize:0 2025-03-17T18:45:18.5125271Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/__init__.py::DeQuantize:0, line 145 <- wrt source file 2025-03-17T18:45:18.5127176Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/__init__.py::DeQuantize:0 2025-03-17T18:45:18.5128955Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv1d:0, line 43 <- wrt source file 2025-03-17T18:45:18.5130778Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv1d:0 2025-03-17T18:45:18.5132734Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv2d:0, line 124 <- wrt source file 2025-03-17T18:45:18.5134586Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv2d:0 2025-03-17T18:45:18.5136082Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv3d:0, line 208 <- wrt source file 2025-03-17T18:45:18.5137825Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::Conv3d:0 2025-03-17T18:45:18.5139611Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose1d:0, line 294 <- wrt source file 2025-03-17T18:45:18.5141630Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose1d:0 2025-03-17T18:45:18.5143585Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose2d:0, line 376 <- wrt source file 2025-03-17T18:45:18.5145293Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose2d:0 2025-03-17T18:45:18.5146926Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose3d:0, line 458 <- wrt source file 2025-03-17T18:45:18.5148495Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/conv.py::ConvTranspose3d:0 2025-03-17T18:45:18.5150669Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/linear.py::Linear:0, line 30 <- wrt source file 2025-03-17T18:45:18.5153541Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/linear.py::Linear:0 2025-03-17T18:45:18.5156142Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTM:0, line 516 <- wrt source file 2025-03-17T18:45:18.5158762Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTM:0 2025-03-17T18:45:18.5161318Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRU:0, line 801 <- wrt source file 2025-03-17T18:45:18.5163912Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRU:0 2025-03-17T18:45:18.5166508Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::RNNCell:0, line 1203 <- wrt source file 2025-03-17T18:45:18.5169208Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::RNNCell:0 2025-03-17T18:45:18.5171965Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTMCell:0, line 1269 <- wrt source file 2025-03-17T18:45:18.5175091Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::LSTMCell:0 2025-03-17T18:45:18.5178267Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRUCell:0, line 1322 <- wrt source file 2025-03-17T18:45:18.5181246Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/dynamic/modules/rnn.py::GRUCell:0 2025-03-17T18:45:18.5184237Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/activation.py::ReLU6:0, line 36 <- wrt source file 2025-03-17T18:45:18.5187347Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/activation.py::ReLU6:0 2025-03-17T18:45:18.5190133Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv2d:0, line 505 <- wrt source file 2025-03-17T18:45:18.5192681Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv2d:0 2025-03-17T18:45:18.5195106Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv3d:0, line 634 <- wrt source file 2025-03-17T18:45:18.5197598Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::Conv3d:0 2025-03-17T18:45:18.5200146Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose1d:0, line 890 <- wrt source file 2025-03-17T18:45:18.5202842Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose1d:0 2025-03-17T18:45:18.5205498Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose2d:0, line 1012 <- wrt source file 2025-03-17T18:45:18.5208203Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose2d:0 2025-03-17T18:45:18.5210846Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose3d:0, line 1138 <- wrt source file 2025-03-17T18:45:18.5213548Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py::ConvTranspose3d:0 2025-03-17T18:45:18.5216322Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::Embedding:0, line 112 <- wrt source file 2025-03-17T18:45:18.5219094Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::Embedding:0 2025-03-17T18:45:18.5221828Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::EmbeddingBag:0, line 275 <- wrt source file 2025-03-17T18:45:18.5224656Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/embedding_ops.py::EmbeddingBag:0 2025-03-17T18:45:18.5227577Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::FloatFunctional:0, line 23 <- wrt source file 2025-03-17T18:45:18.5230604Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::FloatFunctional:0 2025-03-17T18:45:18.5233523Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::QFunctional:0, line 176 <- wrt source file 2025-03-17T18:45:18.5236427Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/functional_modules.py::QFunctional:0 2025-03-17T18:45:18.5239224Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/linear.py::Linear:0, line 138 <- wrt source file 2025-03-17T18:45:18.5241765Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/linear.py::Linear:0 2025-03-17T18:45:18.5244886Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/activation_sparsifier/activation_sparsifier.py::ActivationSparsifier:0, line 62 <- wrt source file 2025-03-17T18:45:18.5248477Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/activation_sparsifier/activation_sparsifier.py::ActivationSparsifier:0 2025-03-17T18:45:18.5252006Z 
* DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/data_scheduler/base_data_scheduler.py::BaseDataScheduler.get_schedule_param:0, line 98 <- wrt source file 2025-03-17T18:45:18.5255699Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/data_scheduler/base_data_scheduler.py::BaseDataScheduler.get_schedule_param:0 2025-03-17T18:45:18.5259140Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/data_sparsifier/base_data_sparsifier.py::BaseDataSparsifier:0, line 55 <- wrt source file 2025-03-17T18:45:18.5262508Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/_experimental/data_sparsifier/base_data_sparsifier.py::BaseDataSparsifier:0 2025-03-17T18:45:18.5265486Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/scheduler/lambda_scheduler.py::LambdaSL:0, line 22 <- wrt source file 2025-03-17T18:45:18.5268257Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/scheduler/lambda_scheduler.py::LambdaSL:0 2025-03-17T18:45:18.5270998Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier:0, line 47 <- wrt source file 2025-03-17T18:45:18.5273848Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py::BaseSparsifier:0 2025-03-17T18:45:18.5276516Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuse_modules.py::fuse_modules:0, line 176 <- wrt source file 2025-03-17T18:45:18.5279201Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuse_modules.py::fuse_modules:0 2025-03-17T18:45:18.5281826Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn:0, line 31 <- wrt source file 2025-03-17T18:45:18.5284601Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn:0 2025-03-17T18:45:18.5287352Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn_relu:0, line 76 <- wrt source file 2025-03-17T18:45:18.5290206Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_conv_bn_relu:0 2025-03-17T18:45:18.5292987Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_linear_bn:0, line 130 <- wrt source file 2025-03-17T18:45:18.5295790Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_linear_bn:0 2025-03-17T18:45:18.5298613Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_convtranspose_bn:0, line 163 <- wrt source file 2025-03-17T18:45:18.5301566Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fuser_method_mappings.py::fuse_convtranspose_bn:0 2025-03-17T18:45:18.5304207Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/observer.py::_with_args:0, line 108 <- wrt source file 2025-03-17T18:45:18.5306752Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/observer.py::_with_args:0 2025-03-17T18:45:18.5309274Z * 
DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/observer.py::_with_callable_args:0, line 130 <- wrt source file 2025-03-17T18:45:18.5311912Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/observer.py::_with_callable_args:0 2025-03-17T18:45:18.5314408Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::fuse_fx:0, line 218 <- wrt source file 2025-03-17T18:45:18.5316903Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::fuse_fx:0 2025-03-17T18:45:18.5319368Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::prepare_fx:0, line 286 <- wrt source file 2025-03-17T18:45:18.5321896Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::prepare_fx:0 2025-03-17T18:45:18.5324416Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::prepare_qat_fx:0, line 424 <- wrt source file 2025-03-17T18:45:18.5327033Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::prepare_qat_fx:0 2025-03-17T18:45:18.5329560Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::convert_fx:0, line 598 <- wrt source file 2025-03-17T18:45:18.5332081Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::convert_fx:0 2025-03-17T18:45:18.5334685Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::convert_to_reference_fx:0, line 658 <- wrt source file 2025-03-17T18:45:18.5337624Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::convert_to_reference_fx:0 2025-03-17T18:45:18.5340594Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::_convert_to_reference_decomposed_fx:0, line 710 <- wrt source file 2025-03-17T18:45:18.5343612Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_fx.py::_convert_to_reference_decomposed_fx:0 2025-03-17T18:45:18.5346376Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::prepare_pt2e:0, line 47 <- wrt source file 2025-03-17T18:45:18.5349046Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::prepare_pt2e:0 2025-03-17T18:45:18.5351657Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::prepare_qat_pt2e:0, line 125 <- wrt source file 2025-03-17T18:45:18.5354350Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::prepare_qat_pt2e:0 2025-03-17T18:45:18.5356955Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::convert_pt2e:0, line 222 <- wrt source file 2025-03-17T18:45:18.5359561Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/quantize_pt2e.py::convert_pt2e:0 2025-03-17T18:45:18.5362097Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::get_combined_dict:0, line 145 <- wrt source file 2025-03-17T18:45:18.5364667Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::get_combined_dict:0 2025-03-17T18:45:18.5367196Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_path_of_module:0, line 517 <- wrt source file 2025-03-17T18:45:18.5369751Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_path_of_module:0 2025-03-17T18:45:18.5372293Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_signature_locals:0, line 539 <- wrt source file 2025-03-17T18:45:18.5374948Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_signature_locals:0 2025-03-17T18:45:18.5377474Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_default_kwargs:0, line 553 <- wrt source file 2025-03-17T18:45:18.5380039Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_default_kwargs:0 2025-03-17T18:45:18.5382549Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_normalize_kwargs:0, line 575 <- wrt source file 2025-03-17T18:45:18.5385119Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_normalize_kwargs:0 2025-03-17T18:45:18.5387661Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_num_pos_args:0, line 702 <- wrt source file 2025-03-17T18:45:18.5390183Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/utils.py::_get_num_pos_args:0 2025-03-17T18:45:18.5392900Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/backend_config/onednn.py::_fuse_linear_bn_leaky_relu:0, line 85 <- wrt source file 2025-03-17T18:45:18.5395919Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/backend_config/onednn.py::_fuse_linear_bn_leaky_relu:0 2025-03-17T18:45:18.5398897Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report.py::ModelReport:0, line 84 <- wrt source file 2025-03-17T18:45:18.5401788Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report.py::ModelReport:0 2025-03-17T18:45:18.5404673Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py::_get_reduction_params:0, line 102 <- wrt source file 2025-03-17T18:45:18.5407684Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py::_get_reduction_params:0 2025-03-17T18:45:18.5410624Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py::_register_custom_op:0, line 148 <- wrt source file 2025-03-17T18:45:18.5413604Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/_affine_quantization.py::_register_custom_op:0 2025-03-17T18:45:18.5416467Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/prepare.py::_get_edge_or_node_to_group_id:0, line 188 <- wrt source file 2025-03-17T18:45:18.5419379Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/prepare.py::_get_edge_or_node_to_group_id:0 
2025-03-17T18:45:18.5422335Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/utils.py::_replace_literals_with_new_placeholders:0, line 430 <- wrt source file 2025-03-17T18:45:18.5425436Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/pt2e/utils.py::_replace_literals_with_new_placeholders:0 2025-03-17T18:45:18.5428175Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/anomaly_mode.py::detect_anomaly:0, line 27 <- wrt source file 2025-03-17T18:45:18.5430631Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/anomaly_mode.py::detect_anomaly:0 2025-03-17T18:45:18.5432949Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::make_dual:0, line 83 <- wrt source file 2025-03-17T18:45:18.5435270Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::make_dual:0 2025-03-17T18:45:18.5437770Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::unpack_dual:0, line 153 <- wrt source file 2025-03-17T18:45:18.5440126Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::unpack_dual:0 2025-03-17T18:45:18.5442426Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::dual_level:0, line 189 <- wrt source file 2025-03-17T18:45:18.5444738Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/forward_ad.py::dual_level:0 2025-03-17T18:45:18.5447200Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.save_for_backward:0, line 66 <- wrt source file 2025-03-17T18:45:18.5449904Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.save_for_backward:0 2025-03-17T18:45:18.5452543Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.save_for_forward:0, line 109 <- wrt source file 2025-03-17T18:45:18.5455209Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.save_for_forward:0 2025-03-17T18:45:18.5457923Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.mark_dirty:0, line 160 <- wrt source file 2025-03-17T18:45:18.5460484Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.mark_dirty:0 2025-03-17T18:45:18.5463149Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.mark_non_differentiable:0, line 207 <- wrt source file 2025-03-17T18:45:18.5465979Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.mark_non_differentiable:0 2025-03-17T18:45:18.5468797Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.set_materialize_grads:0, line 236 <- wrt source file 2025-03-17T18:45:18.5471580Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::FunctionCtx.set_materialize_grads:0 2025-03-17T18:45:18.5474070Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::Function:0, line 479 <- wrt source file 2025-03-17T18:45:18.5476339Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/function.py::Function:0 2025-03-17T18:45:18.5478545Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::vjp:0, line 293 <- wrt source file 2025-03-17T18:45:18.5480781Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::vjp:0 2025-03-17T18:45:18.5482955Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::jvp:0, line 395 <- wrt source file 2025-03-17T18:45:18.5485236Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::jvp:0 2025-03-17T18:45:18.5487484Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::jacobian:0, line 630 <- wrt source file 2025-03-17T18:45:18.5489809Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::jacobian:0 2025-03-17T18:45:18.5492072Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::hessian:0, line 884 <- wrt source file 2025-03-17T18:45:18.5494425Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::hessian:0 2025-03-17T18:45:18.5496643Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::vhp:0, line 1000 <- wrt source file 2025-03-17T18:45:18.5498876Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::vhp:0 2025-03-17T18:45:18.5501059Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::hvp:0, line 1099 <- wrt source file 2025-03-17T18:45:18.5503291Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/functional.py::hvp:0 2025-03-17T18:45:18.5505458Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::no_grad:0, line 50 <- wrt source file 2025-03-17T18:45:18.5507750Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::no_grad:0 2025-03-17T18:45:18.5510001Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::enable_grad:0, line 108 <- wrt source file 2025-03-17T18:45:18.5512328Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::enable_grad:0 2025-03-17T18:45:18.5514656Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::set_grad_enabled:0, line 166 <- wrt source file 2025-03-17T18:45:18.5517139Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::set_grad_enabled:0 2025-03-17T18:45:18.5519509Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::inference_mode:0, line 232 <- wrt source file 2025-03-17T18:45:18.5521917Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/grad_mode.py::inference_mode:0 2025-03-17T18:45:18.5524178Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.name:0, line 53 <- wrt source file 2025-03-17T18:45:18.5526381Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.name:0 2025-03-17T18:45:18.5528654Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.register_hook:0, line 110 <- wrt source file 2025-03-17T18:45:18.5531057Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.register_hook:0 2025-03-17T18:45:18.5533445Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.register_prehook:0, line 147 <- wrt source file 2025-03-17T18:45:18.5535915Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::Node.register_prehook:0 2025-03-17T18:45:18.5538430Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::saved_tensors_hooks:0, line 271 <- wrt source file 2025-03-17T18:45:18.5540899Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::saved_tensors_hooks:0 2025-03-17T18:45:18.5543177Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::save_on_cpu:0, line 336 <- wrt source file 2025-03-17T18:45:18.5545422Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::save_on_cpu:0 2025-03-17T18:45:18.5547839Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::disable_saved_tensors_hooks:0, line 393 <- wrt source file 2025-03-17T18:45:18.5550529Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::disable_saved_tensors_hooks:0 2025-03-17T18:45:18.5553026Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::register_multi_grad_hook:0, line 470 <- wrt source file 2025-03-17T18:45:18.5555525Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::register_multi_grad_hook:0 2025-03-17T18:45:18.5558067Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::allow_mutation_on_saved_tensors:0, line 736 <- wrt source file 2025-03-17T18:45:18.5560706Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/graph.py::allow_mutation_on_saved_tensors:0 2025-03-17T18:45:18.5563119Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::profile:0, line 178 <- wrt source file 2025-03-17T18:45:18.5565380Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::profile:0 2025-03-17T18:45:18.5567683Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::record_function:0, line 733 <- wrt source file 2025-03-17T18:45:18.5570094Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::record_function:0 2025-03-17T18:45:18.5572382Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::emit_itt:0, line 867 <- wrt source file 2025-03-17T18:45:18.5574727Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::emit_itt:0 2025-03-17T18:45:18.5576961Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::emit_nvtx:0, line 940 <- wrt source file 2025-03-17T18:45:18.5579232Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/autograd/profiler.py::emit_nvtx:0 2025-03-17T18:45:18.5581452Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::gds_register_buffer:0, line 42 <- wrt source file 2025-03-17T18:45:18.5583717Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::gds_register_buffer:0 2025-03-17T18:45:18.5585955Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::gds_deregister_buffer:0, line 58 <- wrt source file 2025-03-17T18:45:18.5588606Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::gds_deregister_buffer:0 2025-03-17T18:45:18.5590738Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::GdsFile:0, line 85 <- wrt source file 2025-03-17T18:45:18.5592795Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/gds.py::GdsFile:0 2025-03-17T18:45:18.5594939Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:0, line 114 <- wrt source file 2025-03-17T18:45:18.5597231Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:0 2025-03-17T18:45:18.5599525Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:1, line 125 <- wrt source file 2025-03-17T18:45:18.5601827Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:1 2025-03-17T18:45:18.5604085Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:2, line 138 <- wrt source file 2025-03-17T18:45:18.5606376Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_jit_fn:2 2025-03-17T18:45:18.5608803Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_multi_output_jit_fn:0, line 171 <- wrt source file 2025-03-17T18:45:18.5611363Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/jiterator.py::_create_multi_output_jit_fn:0 2025-03-17T18:45:18.5613677Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/profiler.py::profile:0, line 75 <- wrt source file 2025-03-17T18:45:18.5616191Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/profiler.py::profile:0 2025-03-17T18:45:18.5618844Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::DeviceMesh:0, line 415 <- wrt source file 2025-03-17T18:45:18.5621636Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::DeviceMesh:0 2025-03-17T18:45:18.5624183Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::DeviceMesh.get_local_rank:0, line 931 <- wrt source file 2025-03-17T18:45:18.5626957Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::DeviceMesh.get_local_rank:0 2025-03-17T18:45:18.5629570Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::init_device_mesh:0, line 1013 <- wrt source file 2025-03-17T18:45:18.5632214Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py::init_device_mesh:0 2025-03-17T18:45:18.5634830Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::_coalescing_manager:0, line 2523 <- wrt source file 2025-03-17T18:45:18.5637698Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::_coalescing_manager:0 2025-03-17T18:45:18.5640363Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_object:0, line 3035 <- wrt source file 
2025-03-17T18:45:18.5643047Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_object:0 2025-03-17T18:45:18.5645682Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::send_object_list:0, line 3256 <- wrt source file 2025-03-17T18:45:18.5648347Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::send_object_list:0 2025-03-17T18:45:18.5650965Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::recv_object_list:0, line 3354 <- wrt source file 2025-03-17T18:45:18.5653642Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::recv_object_list:0 2025-03-17T18:45:18.5656331Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::broadcast_object_list:0, line 3464 <- wrt source file 2025-03-17T18:45:18.5659182Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::broadcast_object_list:0 2025-03-17T18:45:18.5661895Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::scatter_object_list:0, line 3583 <- wrt source file 2025-03-17T18:45:18.5664636Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::scatter_object_list:0 2025-03-17T18:45:18.5667407Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_into_tensor:0, line 3792 <- wrt source file 2025-03-17T18:45:18.5670248Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_into_tensor:0 2025-03-17T18:45:18.5672980Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_coalesced:0, line 3934 <- wrt source file 2025-03-17T18:45:18.5675740Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::all_gather_coalesced:0 2025-03-17T18:45:18.5678316Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::gather:0, line 4040 <- wrt source file 2025-03-17T18:45:18.5692859Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::gather:0 2025-03-17T18:45:18.5695527Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::scatter:0, line 4126 <- wrt source file 2025-03-17T18:45:18.5698063Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::scatter:0 2025-03-17T18:45:18.5700665Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::reduce_scatter_tensor:0, line 4265 <- wrt source file 2025-03-17T18:45:18.5703468Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::reduce_scatter_tensor:0 2025-03-17T18:45:18.5706337Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::monitored_barrier:0, line 4732 <- wrt source file 2025-03-17T18:45:18.5709182Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::monitored_barrier:0 2025-03-17T18:45:18.5711799Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::new_subgroups:0, line 5310 <- wrt source file 2025-03-17T18:45:18.5714436Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::new_subgroups:0 2025-03-17T18:45:18.5717170Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::new_subgroups_by_enumeration:0, line 5412 <- wrt source file 2025-03-17T18:45:18.5720101Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py::new_subgroups_by_enumeration:0 2025-03-17T18:45:18.5722570Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/run.py::__doc__:0, line 57 <- wrt source file 2025-03-17T18:45:18.5724745Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/run.py::__doc__:0 2025-03-17T18:45:18.5727048Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/autograd/__init__.py::context:0, line 39 <- wrt source file 2025-03-17T18:45:18.5729527Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/autograd/__init__.py::context:0 2025-03-17T18:45:18.5732227Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/checkpoint_activation.py::checkpoint:0, line 53 <- wrt source file 2025-03-17T18:45:18.5735134Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/checkpoint_activation.py::checkpoint:0 2025-03-17T18:45:18.5737929Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/contract.py::contract:0, line 66 <- wrt source file 2025-03-17T18:45:18.5740585Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/contract.py::contract:0 2025-03-17T18:45:18.5743143Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/replicate.py::replicate:0, line 190 <- wrt source file 2025-03-17T18:45:18.5745788Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_composable/replicate.py::replicate:0 2025-03-17T18:45:18.5748675Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_optim/__init__.py::named_params_with_sharded_tensor:0, line 31 <- wrt source file 2025-03-17T18:45:18.5751815Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_optim/__init__.py::named_params_with_sharded_tensor:0 2025-03-17T18:45:18.5754806Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::custom_sharded_op_impl:0, line 457 <- wrt source file 2025-03-17T18:45:18.5757799Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py::custom_sharded_op_impl:0 2025-03-17T18:45:18.5760711Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/_ops/_common.py::_sharded_op_common:0, line 18 <- wrt source file 2025-03-17T18:45:18.5763703Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/_ops/_common.py::_sharded_op_common:0 2025-03-17T18:45:18.5766592Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_tools/memory_tracker.py::MemoryTracker:0, line 55 <- wrt 
source file 2025-03-17T18:45:18.5769297Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_tools/memory_tracker.py::MemoryTracker:0 2025-03-17T18:45:18.5771814Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/join.py::Join:0, line 141 <- wrt source file 2025-03-17T18:45:18.5774229Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/join.py::Join:0 2025-03-17T18:45:18.5776943Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/__init__.py::register_ddp_comm_hook:0, line 107 <- wrt source file 2025-03-17T18:45:18.5780077Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/__init__.py::register_ddp_comm_hook:0 2025-03-17T18:45:18.5783094Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/debugging_hooks.py::noop_hook:0, line 23 <- wrt source file 2025-03-17T18:45:18.5786139Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/debugging_hooks.py::noop_hook:0 2025-03-17T18:45:18.5789209Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::allreduce_hook:0, line 49 <- wrt source file 2025-03-17T18:45:18.5792364Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::allreduce_hook:0 2025-03-17T18:45:18.5795435Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_hook:0, line 104 <- wrt source file 2025-03-17T18:45:18.5798587Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_hook:0 2025-03-17T18:45:18.5801729Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_hook:0, line 125 <- wrt source file 2025-03-17T18:45:18.5804886Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_hook:0 2025-03-17T18:45:18.5808030Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_wrapper:0, line 143 <- wrt source file 2025-03-17T18:45:18.5811262Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::fp16_compress_wrapper:0 2025-03-17T18:45:18.5814427Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_wrapper:0, line 182 <- wrt source file 2025-03-17T18:45:18.5817652Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/default_hooks.py::bf16_compress_wrapper:0 2025-03-17T18:45:18.5820809Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::batched_powerSGD_hook:0, line 707 <- wrt source file 2025-03-17T18:45:18.5824035Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py::batched_powerSGD_hook:0 2025-03-17T18:45:18.5827447Z * 
DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_pertensor_hook:0, line 64 <- wrt source file 2025-03-17T18:45:18.5830935Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_pertensor_hook:0 2025-03-17T18:45:18.5834370Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_perchannel_hook:0, line 145 <- wrt source file 2025-03-17T18:45:18.5838000Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/quantization_hooks.py::quantization_perchannel_hook:0 2025-03-17T18:45:18.5841133Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py::_patch_model_state_dict:0, line 1401 <- wrt source file 2025-03-17T18:45:18.5844072Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py::_patch_model_state_dict:0 2025-03-17T18:45:18.5846997Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py::_patch_optimizer_state_dict:0, line 1460 <- wrt source file 2025-03-17T18:45:18.5850011Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py::_patch_optimizer_state_dict:0 2025-03-17T18:45:18.5853054Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/rendezvous/api.py::RendezvousHandler.shutdown:0, line 231 <- wrt source file 2025-03-17T18:45:18.5856195Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/rendezvous/api.py::RendezvousHandler.shutdown:0 2025-03-17T18:45:18.5859098Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/utils/distributed.py::get_free_port:0, line 141 <- wrt source file 2025-03-17T18:45:18.5861935Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/utils/distributed.py::get_free_port:0 2025-03-17T18:45:18.5864569Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py::StateDictType:0, line 262 <- wrt source file 2025-03-17T18:45:18.5867074Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py::StateDictType:0 2025-03-17T18:45:18.5869882Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel:0, line 130 <- wrt source file 2025-03-17T18:45:18.5873069Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel:0 2025-03-17T18:45:18.5876469Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.shard_full_optim_state_dict:0, line 1495 <- wrt source file 2025-03-17T18:45:18.5880162Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.shard_full_optim_state_dict:0 2025-03-17T18:45:18.5883829Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.scatter_full_optim_state_dict:0, 
line 1615 <- wrt source file 2025-03-17T18:45:18.5887592Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.scatter_full_optim_state_dict:0 2025-03-17T18:45:18.5891309Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.rekey_optim_state_dict:0, line 1700 <- wrt source file 2025-03-17T18:45:18.5894929Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py::FullyShardedDataParallel.rekey_optim_state_dict:0 2025-03-17T18:45:18.5898086Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/sharded_grad_scaler.py::ShardedGradScaler:0, line 54 <- wrt source file 2025-03-17T18:45:18.5900963Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/sharded_grad_scaler.py::ShardedGradScaler:0 2025-03-17T18:45:18.5903580Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/wrap.py::CustomPolicy:0, line 224 <- wrt source file 2025-03-17T18:45:18.5906038Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/wrap.py::CustomPolicy:0 2025-03-17T18:45:18.5908576Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/functional.py::_all_gather_base:0, line 134 <- wrt source file 2025-03-17T18:45:18.5911181Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/functional.py::_all_gather_base:0 2025-03-17T18:45:18.5914044Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_apply_optimizer_in_backward:0, line 43 <- wrt source file 2025-03-17T18:45:18.5917332Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_apply_optimizer_in_backward:0 2025-03-17T18:45:18.5920494Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_get_in_backward_optimizers:0, line 114 <- wrt source file 2025-03-17T18:45:18.5923705Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/apply_optimizer_in_backward.py::_get_in_backward_optimizers:0 2025-03-17T18:45:18.5926673Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/named_optimizer.py::_NamedOptimizer:0, line 44 <- wrt source file 2025-03-17T18:45:18.5929850Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/named_optimizer.py::_NamedOptimizer:0 2025-03-17T18:45:18.5933121Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/utils.py::register_functional_optim:0, line 37 <- wrt source file 2025-03-17T18:45:18.5936264Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/utils.py::register_functional_optim:0 2025-03-17T18:45:18.5939107Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/_IR.py::pipe_split:0, line 333 <- wrt source file 2025-03-17T18:45:18.5941704Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/_IR.py::pipe_split:0 2025-03-17T18:45:18.5944547Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py::TensorChunkSpec.from_tuple:0, line 82 <- wrt source file 2025-03-17T18:45:18.5947747Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py::TensorChunkSpec.from_tuple:0 2025-03-17T18:45:18.5950818Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py::TensorChunkSpec.from_dict:0, line 101 <- wrt source file 2025-03-17T18:45:18.5954126Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py::TensorChunkSpec.from_dict:0 2025-03-17T18:45:18.5956818Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::_wait_all:0, line 175 <- wrt source file 2025-03-17T18:45:18.5959212Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::_wait_all:0 2025-03-17T18:45:18.5961575Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::shutdown:0, line 346 <- wrt source file 2025-03-17T18:45:18.5963932Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::shutdown:0 2025-03-17T18:45:18.5966188Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::remote:0, line 605 <- wrt source file 2025-03-17T18:45:18.5968467Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::remote:0 2025-03-17T18:45:18.5970704Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::rpc_sync:0, line 785 <- wrt source file 2025-03-17T18:45:18.5973000Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::rpc_sync:0 2025-03-17T18:45:18.5975273Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::rpc_async:0, line 877 <- wrt source file 2025-03-17T18:45:18.5977677Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/api.py::rpc_async:0 2025-03-17T18:45:18.5980033Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_api.py::_shard_tensor:0, line 813 <- wrt source file 2025-03-17T18:45:18.5982528Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_api.py::_shard_tensor:0 2025-03-17T18:45:18.5985268Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_random.py::OffsetBasedRNGTracker._set_pre_op_offset:0, line 251 <- wrt source file 2025-03-17T18:45:18.5988475Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_random.py::OffsetBasedRNGTracker._set_pre_op_offset:0 2025-03-17T18:45:18.5991348Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_ops/_common_rules.py::pointwise_rule:0, line 235 <- wrt source file 2025-03-17T18:45:18.5994142Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/_ops/_common_rules.py::pointwise_rule:0 2025-03-17T18:45:18.5996922Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/api.py::parallelize_module:0, line 57 <- wrt source file 2025-03-17T18:45:18.5999733Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/api.py::parallelize_module:0 
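For reference, the torch.distributed collectives named in the entries above (all_gather_object, broadcast_object_list, scatter, gather, and the FSDP/RPC/tensor-parallel helpers) only run inside an initialized process group; below is a minimal illustrative sketch of that setup, assuming the CPU "gloo" backend. It is not taken from this job, and the loopback address/port are arbitrary placeholders.
import torch.distributed as dist
import torch.multiprocessing as mp

def _worker(rank, world_size):
    # Join a CPU-only process group; the TCP rendezvous address is a placeholder.
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29501",
        rank=rank,
        world_size=world_size,
    )
    gathered = [None] * world_size
    # all_gather_object collects one picklable object from every rank on every rank.
    dist.all_gather_object(gathered, {"rank": rank})
    print(f"rank {rank} sees {gathered}")
    dist.destroy_process_group()

if __name__ == "__main__":
    mp.spawn(_worker, args=(2,), nprocs=2)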
2025-03-17T18:45:18.6002522Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/ddp.py::_pre_dp_module_transform:0, line 88 <- wrt source file 2025-03-17T18:45:18.6005422Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/ddp.py::_pre_dp_module_transform:0 2025-03-17T18:45:18.6008194Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/loss.py::loss_parallel:0, line 55 <- wrt source file 2025-03-17T18:45:18.6010921Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/loss.py::loss_parallel:0 2025-03-17T18:45:18.6013727Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::ColwiseParallel:0, line 63 <- wrt source file 2025-03-17T18:45:18.6016550Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::ColwiseParallel:0 2025-03-17T18:45:18.6019320Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::RowwiseParallel:0, line 189 <- wrt source file 2025-03-17T18:45:18.6022151Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::RowwiseParallel:0 2025-03-17T18:45:18.6024933Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::SequenceParallel:0, line 333 <- wrt source file 2025-03-17T18:45:18.6027847Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py::SequenceParallel:0 2025-03-17T18:45:18.6030421Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/bernoulli.py::Bernoulli:0, line 28 <- wrt source file 2025-03-17T18:45:18.6032855Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/bernoulli.py::Bernoulli:0 2025-03-17T18:45:18.6035128Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/beta.py::Beta:0, line 19 <- wrt source file 2025-03-17T18:45:18.6037473Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/beta.py::Beta:0 2025-03-17T18:45:18.6039820Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/binomial.py::Binomial:0, line 29 <- wrt source file 2025-03-17T18:45:18.6042215Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/binomial.py::Binomial:0 2025-03-17T18:45:18.6044638Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/categorical.py::Categorical:0, line 40 <- wrt source file 2025-03-17T18:45:18.6047175Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/categorical.py::Categorical:0 2025-03-17T18:45:18.6049574Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/cauchy.py::Cauchy:0, line 22 <- wrt source file 2025-03-17T18:45:18.6051872Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/cauchy.py::Cauchy:0 2025-03-17T18:45:18.6054086Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/chi2.py::Chi2:0, line 16 <- wrt source file 2025-03-17T18:45:18.6056309Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/chi2.py::Chi2:0 2025-03-17T18:45:18.6058664Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/constraints.py::is_dependent:0, line 164 <- wrt source file 2025-03-17T18:45:18.6061225Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/constraints.py::is_dependent:0 2025-03-17T18:45:18.6063809Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/constraints.py::_DependentProperty:0, line 185 <- wrt source file 2025-03-17T18:45:18.6066566Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/constraints.py::_DependentProperty:0 2025-03-17T18:45:18.6069335Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/continuous_bernoulli.py::ContinuousBernoulli:0, line 34 <- wrt source file 2025-03-17T18:45:18.6072358Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/continuous_bernoulli.py::ContinuousBernoulli:0 2025-03-17T18:45:18.6074970Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/dirichlet.py::Dirichlet:0, line 40 <- wrt source file 2025-03-17T18:45:18.6077424Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/dirichlet.py::Dirichlet:0 2025-03-17T18:45:18.6079863Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/exponential.py::Exponential:0, line 18 <- wrt source file 2025-03-17T18:45:18.6082466Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/exponential.py::Exponential:0 2025-03-17T18:45:18.6085045Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/fishersnedecor.py::FisherSnedecor:0, line 19 <- wrt source file 2025-03-17T18:45:18.6087731Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/fishersnedecor.py::FisherSnedecor:0 2025-03-17T18:45:18.6090124Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/gamma.py::Gamma:0, line 22 <- wrt source file 2025-03-17T18:45:18.6092374Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/gamma.py::Gamma:0 2025-03-17T18:45:18.6094669Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/geometric.py::Geometric:0, line 34 <- wrt source file 2025-03-17T18:45:18.6097107Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/geometric.py::Geometric:0 2025-03-17T18:45:18.6099464Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/gumbel.py::Gumbel:0, line 22 <- wrt source file 2025-03-17T18:45:18.6101767Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/gumbel.py::Gumbel:0 2025-03-17T18:45:18.6104107Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/half_cauchy.py::HalfCauchy:0, line 23 <- wrt source file 2025-03-17T18:45:18.6106637Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/half_cauchy.py::HalfCauchy:0 2025-03-17T18:45:18.6109134Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/half_normal.py::HalfNormal:0, line 23 <- wrt source file 2025-03-17T18:45:18.6111606Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/half_normal.py::HalfNormal:0 2025-03-17T18:45:18.6114081Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/independent.py::Independent:0, line 23 <- wrt source file 2025-03-17T18:45:18.6116621Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/independent.py::Independent:0 2025-03-17T18:45:18.6119119Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/inverse_gamma.py::InverseGamma:0, line 22 <- wrt source file 2025-03-17T18:45:18.6121725Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/inverse_gamma.py::InverseGamma:0 2025-03-17T18:45:18.6124248Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/kumaraswamy.py::Kumaraswamy:0, line 28 <- wrt source file 2025-03-17T18:45:18.6126777Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/kumaraswamy.py::Kumaraswamy:0 2025-03-17T18:45:18.6129155Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/laplace.py::Laplace:0, line 18 <- wrt source file 2025-03-17T18:45:18.6131581Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/laplace.py::Laplace:0 2025-03-17T18:45:18.6133971Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/lkj_cholesky.py::LKJCholesky:0, line 41 <- wrt source file 2025-03-17T18:45:18.6136510Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/lkj_cholesky.py::LKJCholesky:0 2025-03-17T18:45:18.6139081Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/log_normal.py::LogNormal:0, line 21 <- wrt source file 2025-03-17T18:45:18.6141520Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/log_normal.py::LogNormal:0 2025-03-17T18:45:18.6144057Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/logistic_normal.py::LogisticNormal:0, line 26 <- wrt source file 2025-03-17T18:45:18.6146786Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/logistic_normal.py::LogisticNormal:0 2025-03-17T18:45:18.6149338Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/multinomial.py::Multinomial:0, line 36 <- wrt source file 2025-03-17T18:45:18.6151884Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/multinomial.py::Multinomial:0 2025-03-17T18:45:18.6154554Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/multivariate_normal.py::MultivariateNormal:0, line 102 <- wrt source file 2025-03-17T18:45:18.6157510Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/multivariate_normal.py::MultivariateNormal:0 2025-03-17T18:45:18.6160034Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/normal.py::Normal:0, line 21 <- wrt source file 2025-03-17T18:45:18.6162331Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/normal.py::Normal:0 2025-03-17T18:45:18.6164844Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/one_hot_categorical.py::OneHotCategorical:0, line 32 <- wrt source file 2025-03-17T18:45:18.6167716Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/one_hot_categorical.py::OneHotCategorical:0 2025-03-17T18:45:18.6170200Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/pareto.py::Pareto:0, line 20 <- wrt source file 2025-03-17T18:45:18.6172518Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/pareto.py::Pareto:0 2025-03-17T18:45:18.6174806Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/poisson.py::Poisson:0, line 23 <- wrt source file 2025-03-17T18:45:18.6177169Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/poisson.py::Poisson:0 2025-03-17T18:45:18.6179499Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/studentT.py::StudentT:0, line 21 <- wrt source file 2025-03-17T18:45:18.6181902Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/studentT.py::StudentT:0 2025-03-17T18:45:18.6184348Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::CatTransform:0, line 1046 <- wrt source file 2025-03-17T18:45:18.6186974Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::CatTransform:0 2025-03-17T18:45:18.6189510Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::StackTransform:0, line 1152 <- wrt source file 2025-03-17T18:45:18.6192196Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::StackTransform:0 2025-03-17T18:45:18.6194973Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::CumulativeDistributionTransform:0, line 1226 <- wrt source file 2025-03-17T18:45:18.6197983Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/transforms.py::CumulativeDistributionTransform:0 2025-03-17T18:45:18.6200575Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/uniform.py::Uniform:0, line 19 <- wrt source file 2025-03-17T18:45:18.6202936Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/uniform.py::Uniform:0 2025-03-17T18:45:18.6205237Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/utils.py::clamp_probs:0, line 109 <- wrt source file 2025-03-17T18:45:18.6207624Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/utils.py::clamp_probs:0 2025-03-17T18:45:18.6209964Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/von_mises.py::VonMises:0, line 116 <- wrt source file 2025-03-17T18:45:18.6212364Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/von_mises.py::VonMises:0 2025-03-17T18:45:18.6214675Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/weibull.py::Weibull:0, line 20 <- wrt source file 2025-03-17T18:45:18.6217060Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/weibull.py::Weibull:0 2025-03-17T18:45:18.6219352Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/wishart.py::Wishart:0, line 39 <- wrt source file 2025-03-17T18:45:18.6221702Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/wishart.py::Wishart:0 2025-03-17T18:45:18.6224098Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/dynamic_shapes.py::ShapesCollection:0, line 611 <- wrt source file 
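For reference, the torch.distributions doctests listed above (Bernoulli through Wishart) exercise the standard construct / sample / log_prob pattern; an illustrative sketch of that pattern, not taken from this job:
import torch
from torch.distributions import Bernoulli, Normal

normal = Normal(loc=torch.tensor(0.0), scale=torch.tensor(1.0))
samples = normal.sample((3,))                    # three draws from N(0, 1)
print(samples.shape, normal.log_prob(samples))   # log-density at the drawn points

bern = Bernoulli(probs=torch.tensor(0.3))
print(bern.sample((5,)))                         # five 0/1 draws with P(X=1) = 0.3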
2025-03-17T18:45:18.6226719Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/export/dynamic_shapes.py::ShapesCollection:0 2025-03-17T18:45:18.6228992Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::_snake_case:0, line 101 <- wrt source file 2025-03-17T18:45:18.6231115Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::_snake_case:0 2025-03-17T18:45:18.6233357Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::Graph.eliminate_dead_code:0, line 1804 <- wrt source file 2025-03-17T18:45:18.6235780Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::Graph.eliminate_dead_code:0 2025-03-17T18:45:18.6238253Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::Graph.on_generate_code:0, line 1878 <- wrt source file 2025-03-17T18:45:18.6240599Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/graph.py::Graph.on_generate_code:0 2025-03-17T18:45:18.6242883Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/interpreter.py::Interpreter:0, line 48 <- wrt source file 2025-03-17T18:45:18.6245167Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/interpreter.py::Interpreter:0 2025-03-17T18:45:18.6247406Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/interpreter.py::Transformer:0, line 464 <- wrt source file 2025-03-17T18:45:18.6249788Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/interpreter.py::Transformer:0 2025-03-17T18:45:18.6252115Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/subgraph_rewriter.py::replace_pattern:0, line 125 <- wrt source file 2025-03-17T18:45:18.6254587Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/subgraph_rewriter.py::replace_pattern:0 2025-03-17T18:45:18.6256892Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::TensorType:0, line 12 <- wrt source file 2025-03-17T18:45:18.6259124Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::TensorType:0 2025-03-17T18:45:18.6261330Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::is_consistent:0, line 65 <- wrt source file 2025-03-17T18:45:18.6263595Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::is_consistent:0 2025-03-17T18:45:18.6265827Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::is_more_precise:0, line 93 <- wrt source file 2025-03-17T18:45:18.6268191Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/tensor_type.py::is_more_precise:0 2025-03-17T18:45:18.6270747Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/rewriter.py::AST_Rewriter.visit_AnnAssign:0, line 96 <- wrt source file 2025-03-17T18:45:18.6273584Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/rewriter.py::AST_Rewriter.visit_AnnAssign:0 2025-03-17T18:45:18.6276296Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/core.py::reify:0, line 58 <- wrt source file 2025-03-17T18:45:18.6278871Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/core.py::reify:0 2025-03-17T18:45:18.6281499Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/match.py::VarDispatcher:0, line 48 <- wrt source file 2025-03-17T18:45:18.6284325Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/match.py::VarDispatcher:0 2025-03-17T18:45:18.6286983Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::unifiable:0, line 11 <- wrt source file 2025-03-17T18:45:18.6289646Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::unifiable:0 2025-03-17T18:45:18.6292282Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::reify_object:0, line 37 <- wrt source file 2025-03-17T18:45:18.6294986Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::reify_object:0 2025-03-17T18:45:18.6297632Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::unify_object:0, line 93 <- wrt source file 2025-03-17T18:45:18.6300338Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/more.py::unify_object:0 2025-03-17T18:45:18.6303069Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::merge:0, line 37 <- wrt source file 2025-03-17T18:45:18.6305933Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::merge:0 2025-03-17T18:45:18.6308911Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::merge_with:0, line 64 <- wrt source file 2025-03-17T18:45:18.6311861Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::merge_with:0 2025-03-17T18:45:18.6314729Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::valmap:0, line 90 <- wrt source file 2025-03-17T18:45:18.6317609Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::valmap:0 2025-03-17T18:45:18.6320436Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::keymap:0, line 106 <- wrt source file 2025-03-17T18:45:18.6323326Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::keymap:0 2025-03-17T18:45:18.6326172Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::itemmap:0, line 122 <- wrt source file 2025-03-17T18:45:18.6329086Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::itemmap:0 2025-03-17T18:45:18.6331964Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::valfilter:0, line 138 <- wrt source file 2025-03-17T18:45:18.6334926Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::valfilter:0 2025-03-17T18:45:18.6337990Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::keyfilter:0, line 158 <- wrt source 
file 2025-03-17T18:45:18.6340969Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::keyfilter:0 2025-03-17T18:45:18.6343885Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::itemfilter:0, line 178 <- wrt source file 2025-03-17T18:45:18.6346984Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::itemfilter:0 2025-03-17T18:45:18.6349849Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc:0, line 204 <- wrt source file 2025-03-17T18:45:18.6352715Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::assoc:0 2025-03-17T18:45:18.6355548Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::dissoc:0, line 221 <- wrt source file 2025-03-17T18:45:18.6358445Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::dissoc:0 2025-03-17T18:45:18.6361256Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::first:0, line 416 <- wrt source file 2025-03-17T18:45:18.6364128Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py::first:0 2025-03-17T18:45:18.6366886Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::transitive_get:0, line 15 <- wrt source file 2025-03-17T18:45:18.6369687Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::transitive_get:0 2025-03-17T18:45:18.6372476Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::_toposort:0, line 42 <- wrt source file 2025-03-17T18:45:18.6375158Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::_toposort:0 2025-03-17T18:45:18.6377810Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::reverse_dict:0, line 70 <- wrt source file 2025-03-17T18:45:18.6380545Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::reverse_dict:0 2025-03-17T18:45:18.6383171Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::freeze:0, line 95 <- wrt source file 2025-03-17T18:45:18.6385780Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/utils.py::freeze:0 2025-03-17T18:45:18.6388488Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/variable.py::variables:0, line 67 <- wrt source file 2025-03-17T18:45:18.6391253Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/variable.py::variables:0 2025-03-17T18:45:18.6394109Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/core.py::dispatch:0, line 20 <- wrt source file 2025-03-17T18:45:18.6397164Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/core.py::dispatch:0 2025-03-17T18:45:18.6400318Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher:0, line 113 <- wrt source file 2025-03-17T18:45:18.6403593Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher:0 2025-03-17T18:45:18.6406883Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.register:0, line 138 <- wrt source file 2025-03-17T18:45:18.6410390Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.register:0 2025-03-17T18:45:18.6413715Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.add:0, line 191 <- wrt source file 2025-03-17T18:45:18.6417051Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.add:0 2025-03-17T18:45:18.6420403Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.dispatch:0, line 304 <- wrt source file 2025-03-17T18:45:18.6423877Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::Dispatcher.dispatch:0 2025-03-17T18:45:18.6427253Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::str_signature:0, line 434 <- wrt source file 2025-03-17T18:45:18.6430581Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/dispatcher.py::str_signature:0 2025-03-17T18:45:18.6433751Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::expand_tuples:0, line 18 <- wrt source file 2025-03-17T18:45:18.6437167Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::expand_tuples:0 2025-03-17T18:45:18.6440238Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::_toposort:0, line 41 <- wrt source file 2025-03-17T18:45:18.6443325Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::_toposort:0 2025-03-17T18:45:18.6446368Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::reverse_dict:0, line 68 <- wrt source file 2025-03-17T18:45:18.6449517Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::reverse_dict:0 2025-03-17T18:45:18.6452552Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::groupby:0, line 87 <- wrt source file 2025-03-17T18:45:18.6455602Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::groupby:0 2025-03-17T18:45:18.6458624Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::typename:0, line 117 <- wrt source file 2025-03-17T18:45:18.6461690Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/utils.py::typename:0 2025-03-17T18:45:18.6464803Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::isvariadic:0, line 47 <- wrt source file 2025-03-17T18:45:18.6468072Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::isvariadic:0 2025-03-17T18:45:18.6471174Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::Variadic:0, line 83 <- wrt source file 2025-03-17T18:45:18.6474376Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/multipledispatch/variadic.py::Variadic:0 2025-03-17T18:45:18.6477236Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/graph_drawer.py::FxGraphDrawer.get_dot_graph:0, line 122 <- wrt source file 2025-03-17T18:45:18.6479989Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/graph_drawer.py::FxGraphDrawer.get_dot_graph:0 2025-03-17T18:45:18.6482471Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/shape_prop.py::ShapeProp:0, line 92 <- wrt source file 2025-03-17T18:45:18.6484818Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/shape_prop.py::ShapeProp:0 2025-03-17T18:45:18.6487155Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/split_module.py::split_module:0, line 85 <- wrt source file 2025-03-17T18:45:18.6489590Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/split_module.py::split_module:0 2025-03-17T18:45:18.6492488Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/utils/matcher_with_name_node_map_utils.py::SubgraphMatcherWithNameNodeMap:0, line 51 <- wrt source file 2025-03-17T18:45:18.6495848Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/passes/utils/matcher_with_name_node_map_utils.py::SubgraphMatcherWithNameNodeMap:0 2025-03-17T18:45:18.6498829Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py::AttributeTypeIsSupportedChecker:0, line 36 <- wrt source file 2025-03-17T18:45:18.6501442Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/_check.py::AttributeTypeIsSupportedChecker:0 2025-03-17T18:45:18.6503983Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_load_for_lite_interpreter:0, line 22 <- wrt source file 2025-03-17T18:45:18.6506652Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_load_for_lite_interpreter:0 2025-03-17T18:45:18.6509280Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_get_mobile_model_contained_types:0, line 122 <- wrt source file 2025-03-17T18:45:18.6512015Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_get_mobile_model_contained_types:0 2025-03-17T18:45:18.6514592Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_get_model_ops_and_info:0, line 214 <- wrt source file 2025-03-17T18:45:18.6517110Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/jit/mobile/__init__.py::_get_model_ops_and_info:0 2025-03-17T18:45:18.6519420Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/masked/_ops.py::logaddexp:0, line 1529 <- wrt source file 2025-03-17T18:45:18.6521571Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/masked/_ops.py::logaddexp:0 2025-03-17T18:45:18.6523955Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/masked/maskedtensor/core.py::is_masked_tensor:0, line 25 <- wrt source file 2025-03-17T18:45:18.6526534Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/masked/maskedtensor/core.py::is_masked_tensor:0 2025-03-17T18:45:18.6529128Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::fractional_max_pool2d_with_indices:0, line 467 <- wrt source file 2025-03-17T18:45:18.6531803Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::fractional_max_pool2d_with_indices:0 2025-03-17T18:45:18.6534485Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::fractional_max_pool3d_with_indices:0, line 586 <- wrt source file 2025-03-17T18:45:18.7113365Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::fractional_max_pool3d_with_indices:0 2025-03-17T18:45:18.7137662Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::gumbel_softmax:0, line 2181 <- wrt source file 2025-03-17T18:45:18.7146990Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::gumbel_softmax:0 2025-03-17T18:45:18.7149688Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::embedding:0, line 2487 <- wrt source file 2025-03-17T18:45:18.7156135Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::embedding:0 2025-03-17T18:45:18.7158363Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::embedding_bag:0, line 2627 <- wrt source file 2025-03-17T18:45:18.7165918Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::embedding_bag:0 2025-03-17T18:45:18.7168116Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::ctc_loss:0, line 3059 <- wrt source file 2025-03-17T18:45:18.7182692Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::ctc_loss:0 2025-03-17T18:45:18.7185190Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::nll_loss:0, line 3136 <- wrt source file 2025-03-17T18:45:18.7189185Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::nll_loss:0 2025-03-17T18:45:18.7191399Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::cross_entropy:0, line 3466 <- wrt source file 2025-03-17T18:45:18.7198260Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::cross_entropy:0 2025-03-17T18:45:18.7200594Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::binary_cross_entropy:0, line 3538 <- wrt source file 2025-03-17T18:45:18.7205008Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::binary_cross_entropy:0 2025-03-17T18:45:18.7207519Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::binary_cross_entropy_with_logits:0, line 3615 <- wrt source file 2025-03-17T18:45:18.7211478Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::binary_cross_entropy_with_logits:0 2025-03-17T18:45:18.7213801Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::pad:0, line 5178 <- wrt source file 2025-03-17T18:45:18.7221212Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/functional.py::pad:0 2025-03-17T18:45:18.7223394Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv1d_input:0, line 32 <- wrt source file 2025-03-17T18:45:18.7229126Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv1d_input:0 2025-03-17T18:45:18.7231223Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv1d_weight:0, line 79 <- wrt source file 2025-03-17T18:45:18.7234089Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv1d_weight:0 2025-03-17T18:45:18.7236170Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv2d_input:0, line 130 <- wrt source file 2025-03-17T18:45:18.7241812Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv2d_input:0 2025-03-17T18:45:18.7243922Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv2d_weight:0, line 177 <- wrt source file 2025-03-17T18:45:18.7246583Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv2d_weight:0 2025-03-17T18:45:18.7248671Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv3d_input:0, line 228 <- wrt source file 2025-03-17T18:45:18.7282033Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv3d_input:0 2025-03-17T18:45:18.7284681Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv3d_weight:0, line 275 <- wrt source file 2025-03-17T18:45:18.7318259Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/grad.py::conv3d_weight:0 2025-03-17T18:45:18.7320736Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::calculate_gain:0, line 102 <- wrt source file 2025-03-17T18:45:18.7323074Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::calculate_gain:0 2025-03-17T18:45:18.7325121Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::uniform_:0, line 159 <- wrt source file 2025-03-17T18:45:18.7327349Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::uniform_:0 2025-03-17T18:45:18.7329358Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::normal_:0, line 186 <- wrt source file 2025-03-17T18:45:18.7331380Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::normal_:0 2025-03-17T18:45:18.7333433Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::trunc_normal_:0, line 221 <- wrt source file 2025-03-17T18:45:18.7352460Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::trunc_normal_:0 2025-03-17T18:45:18.7354536Z * 
DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::constant_:0, line 235 <- wrt source file 2025-03-17T18:45:18.7356595Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::constant_:0 2025-03-17T18:45:18.7358582Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::ones_:0, line 252 <- wrt source file 2025-03-17T18:45:18.7360579Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::ones_:0 2025-03-17T18:45:18.7362537Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::zeros_:0, line 265 <- wrt source file 2025-03-17T18:45:18.7364547Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::zeros_:0 2025-03-17T18:45:18.7366492Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::eye_:0, line 281 <- wrt source file 2025-03-17T18:45:18.7368574Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::eye_:0 2025-03-17T18:45:18.7370538Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::dirac_:0, line 303 <- wrt source file 2025-03-17T18:45:18.7372549Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::dirac_:0 2025-03-17T18:45:18.7374613Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::xavier_uniform_:0, line 389 <- wrt source file 2025-03-17T18:45:18.7376885Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::xavier_uniform_:0 2025-03-17T18:45:18.7379009Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::xavier_normal_:0, line 429 <- wrt source file 2025-03-17T18:45:18.7381172Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::xavier_normal_:0 2025-03-17T18:45:18.7383291Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::kaiming_uniform_:0, line 488 <- wrt source file 2025-03-17T18:45:18.7385484Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::kaiming_uniform_:0 2025-03-17T18:45:18.7387679Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::kaiming_normal_:0, line 553 <- wrt source file 2025-03-17T18:45:18.7389852Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::kaiming_normal_:0 2025-03-17T18:45:18.7391958Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::orthogonal_:0, line 592 <- wrt source file 2025-03-17T18:45:18.7394074Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::orthogonal_:0 2025-03-17T18:45:18.7396101Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::sparse_:0, line 645 <- wrt source file 2025-03-17T18:45:18.7398139Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/init.py::sparse_:0 2025-03-17T18:45:18.7400419Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/attention/__init__.py::sdpa_kernel:0, line 104 <- wrt source file 2025-03-17T18:45:18.7402791Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/attention/__init__.py::sdpa_kernel:0 2025-03-17T18:45:18.7405098Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/attention/bias.py::CausalBias:0, line 94 <- wrt source file 2025-03-17T18:45:18.7407407Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/attention/bias.py::CausalBias:0 2025-03-17T18:45:18.7409693Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Threshold:0, line 70 <- wrt source file 2025-03-17T18:45:18.7412070Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Threshold:0 2025-03-17T18:45:18.7414344Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ReLU:0, line 112 <- wrt source file 2025-03-17T18:45:18.7416606Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ReLU:0 2025-03-17T18:45:18.7418843Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::RReLU:0, line 171 <- wrt source file 2025-03-17T18:45:18.7421123Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::RReLU:0 2025-03-17T18:45:18.7423435Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardtanh:0, line 227 <- wrt source file 2025-03-17T18:45:18.7425804Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardtanh:0 2025-03-17T18:45:18.7428149Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ReLU6:0, line 292 <- wrt source file 2025-03-17T18:45:18.7430444Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ReLU6:0 2025-03-17T18:45:18.7432719Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Sigmoid:0, line 320 <- wrt source file 2025-03-17T18:45:18.7435095Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Sigmoid:0 2025-03-17T18:45:18.7437597Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardsigmoid:0, line 352 <- wrt source file 2025-03-17T18:45:18.7440061Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardsigmoid:0 2025-03-17T18:45:18.7442372Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Tanh:0, line 385 <- wrt source file 2025-03-17T18:45:18.7444651Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Tanh:0 2025-03-17T18:45:18.7446886Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::SiLU:0, line 418 <- wrt source file 2025-03-17T18:45:18.7449159Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::SiLU:0 2025-03-17T18:45:18.7451383Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Mish:0, line 457 <- wrt source file 2025-03-17T18:45:18.7453664Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Mish:0 2025-03-17T18:45:18.7456048Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardswish:0, line 502 <- wrt source file 2025-03-17T18:45:18.7458445Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardswish:0 2025-03-17T18:45:18.7460725Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ELU:0, line 545 <- wrt source file 
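For reference, the torch.nn.init and torch.nn activation-module entries above follow the usual tensor-in/tensor-out pattern; an illustrative sketch, not taken from this job:
import torch
import torch.nn as nn

w = torch.empty(3, 5)
# In-place initializers (trailing underscore) fill the tensor and return it.
nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain("relu"))

relu = nn.ReLU()
x = torch.randn(2, 5)
print(relu(x).min() >= 0)  # ReLU output is non-negative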
2025-03-17T18:45:18.7463004Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::ELU:0 2025-03-17T18:45:18.7465231Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::CELU:0, line 587 <- wrt source file 2025-03-17T18:45:18.7467586Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::CELU:0 2025-03-17T18:45:18.7469808Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::SELU:0, line 640 <- wrt source file 2025-03-17T18:45:18.7472090Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::SELU:0 2025-03-17T18:45:18.7474313Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::GLU:0, line 678 <- wrt source file 2025-03-17T18:45:18.7476577Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::GLU:0 2025-03-17T18:45:18.7478796Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::GELU:0, line 720 <- wrt source file 2025-03-17T18:45:18.7481129Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::GELU:0 2025-03-17T18:45:18.7483437Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardshrink:0, line 763 <- wrt source file 2025-03-17T18:45:18.7485851Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Hardshrink:0 2025-03-17T18:45:18.7488202Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LeakyReLU:0, line 812 <- wrt source file 2025-03-17T18:45:18.7490652Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LeakyReLU:0 2025-03-17T18:45:18.7492997Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LogSigmoid:0, line 848 <- wrt source file 2025-03-17T18:45:18.7495407Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LogSigmoid:0 2025-03-17T18:45:18.7497743Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softplus:0, line 881 <- wrt source file 2025-03-17T18:45:18.7500104Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softplus:0 2025-03-17T18:45:18.7502432Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softshrink:0, line 924 <- wrt source file 2025-03-17T18:45:18.7504847Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softshrink:0 2025-03-17T18:45:18.7507370Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::MultiheadAttention:0, line 1031 <- wrt source file 2025-03-17T18:45:18.7509987Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::MultiheadAttention:0 2025-03-17T18:45:18.7512402Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::PReLU:0, line 1494 <- wrt source file 2025-03-17T18:45:18.7514800Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::PReLU:0 2025-03-17T18:45:18.7517104Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softsign:0, line 1536 <- wrt source file 2025-03-17T18:45:18.7519470Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softsign:0 2025-03-17T18:45:18.7521804Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Tanhshrink:0, line 1559 <- wrt source file 2025-03-17T18:45:18.7524233Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Tanhshrink:0 2025-03-17T18:45:18.7526554Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmin:0, line 1594 <- wrt source file 2025-03-17T18:45:18.7528894Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmin:0 2025-03-17T18:45:18.7531187Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmax:0, line 1652 <- wrt source file 2025-03-17T18:45:18.7533951Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmax:0 2025-03-17T18:45:18.7536643Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmax2d:0, line 1693 <- wrt source file 2025-03-17T18:45:18.7539564Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::Softmax2d:0 2025-03-17T18:45:18.7541927Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LogSoftmax:0, line 1729 <- wrt source file 2025-03-17T18:45:18.7544340Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/activation.py::LogSoftmax:0 2025-03-17T18:45:18.7547157Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm1d:0, line 330 <- wrt source file 2025-03-17T18:45:18.7549955Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm1d:0 2025-03-17T18:45:18.7552674Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm2d:0, line 441 <- wrt source file 2025-03-17T18:45:18.7811656Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm2d:0 2025-03-17T18:45:18.7814458Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm3d:0, line 552 <- wrt source file 2025-03-17T18:45:19.0374537Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py::BatchNorm3d:0 2025-03-17T18:45:19.0535099Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/channelshuffle.py::ChannelShuffle:0, line 21 <- wrt source file 2025-03-17T18:45:19.0556297Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/channelshuffle.py::ChannelShuffle:0 2025-03-17T18:45:19.0558804Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::Sequential:0, line 76 <- wrt source file 2025-03-17T18:45:19.0561210Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::Sequential:0 2025-03-17T18:45:19.0563552Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ModuleList:0, line 282 <- wrt source file 2025-03-17T18:45:19.0566284Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ModuleList:0 2025-03-17T18:45:19.0568640Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ModuleDict:0, line 464 <- wrt source file 2025-03-17T18:45:19.0571029Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ModuleDict:0 2025-03-17T18:45:19.0573407Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ParameterList:0, line 596 <- wrt source file 2025-03-17T18:45:19.0575864Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ParameterList:0 2025-03-17T18:45:19.0578261Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ParameterDict:0, line 748 <- wrt source file 2025-03-17T18:45:19.0580720Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/container.py::ParameterDict:0 2025-03-17T18:45:19.0583164Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/distance.py::PairwiseDistance:0, line 38 <- wrt source file 2025-03-17T18:45:19.0585669Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/distance.py::PairwiseDistance:0 2025-03-17T18:45:19.0588191Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/distance.py::CosineSimilarity:0, line 77 <- wrt source file 2025-03-17T18:45:19.0590698Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/distance.py::CosineSimilarity:0 2025-03-17T18:45:19.0593438Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout:0, line 60 <- wrt source file 2025-03-17T18:45:19.0595704Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout:0 2025-03-17T18:45:19.0597954Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout1d:0, line 105 <- wrt source file 2025-03-17T18:45:19.0600272Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout1d:0 2025-03-17T18:45:19.0602604Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout2d:0, line 157 <- wrt source file 2025-03-17T18:45:19.0615699Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout2d:0 2025-03-17T18:45:19.0618380Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout3d:0, line 202 <- wrt source file 2025-03-17T18:45:19.0695416Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::Dropout3d:0 2025-03-17T18:45:19.0698139Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::AlphaDropout:0, line 245 <- wrt source file 2025-03-17T18:45:19.0700690Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::AlphaDropout:0 2025-03-17T18:45:19.0703123Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::FeatureAlphaDropout:0, line 294 <- wrt source file 2025-03-17T18:45:19.0777698Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/dropout.py::FeatureAlphaDropout:0 2025-03-17T18:45:19.0780335Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/flatten.py::Flatten:0, line 30 <- wrt source file 2025-03-17T18:45:19.0786011Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/flatten.py::Flatten:0 2025-03-17T18:45:19.0788676Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/fold.py::Fold:0, line 111 <- wrt source file 2025-03-17T18:45:19.0791955Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/fold.py::Fold:0 2025-03-17T18:45:19.0794096Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/fold.py::Unfold:0, line 261 <- wrt source file 2025-03-17T18:45:19.0807476Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/fold.py::Unfold:0 2025-03-17T18:45:19.0809883Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm1d:0, line 187 <- wrt source file 2025-03-17T18:45:19.0821856Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm1d:0 2025-03-17T18:45:19.0824412Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm2d:0, line 303 <- wrt source file 2025-03-17T18:45:19.1011870Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm2d:0 2025-03-17T18:45:19.1014928Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm3d:0, line 419 <- wrt source file 2025-03-17T18:45:19.3561706Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/instancenorm.py::InstanceNorm3d:0 2025-03-17T18:45:19.3720624Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/lazy.py::LazyModuleMixin:0, line 87 <- wrt source file 2025-03-17T18:45:19.3723217Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/lazy.py::LazyModuleMixin:0 2025-03-17T18:45:19.3725517Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Identity:0, line 34 <- wrt source file 2025-03-17T18:45:19.3730420Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Identity:0 2025-03-17T18:45:19.3732779Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Linear:0, line 80 <- wrt source file 2025-03-17T18:45:19.3739994Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Linear:0 2025-03-17T18:45:19.3742272Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Bilinear:0, line 179 <- wrt source file 2025-03-17T18:45:19.3761105Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/linear.py::Bilinear:0 2025-03-17T18:45:19.3763343Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::L1Loss:0, line 115 <- wrt source file 2025-03-17T18:45:19.3769133Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::L1Loss:0 2025-03-17T18:45:19.3771293Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::NLLLoss:0, line 211 <- wrt source file 2025-03-17T18:45:19.3803627Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::NLLLoss:0 
2025-03-17T18:45:19.3806357Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::PoissonNLLLoss:0, line 321 <- wrt source file 2025-03-17T18:45:19.3810865Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::PoissonNLLLoss:0 2025-03-17T18:45:19.3813399Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::GaussianNLLLoss:0, line 406 <- wrt source file 2025-03-17T18:45:19.3825891Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::GaussianNLLLoss:0 2025-03-17T18:45:19.3828236Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::KLDivLoss:0, line 519 <- wrt source file 2025-03-17T18:45:19.3835515Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::KLDivLoss:0 2025-03-17T18:45:19.3838248Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MSELoss:0, line 597 <- wrt source file 2025-03-17T18:45:19.3841927Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MSELoss:0 2025-03-17T18:45:19.3844090Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::BCELoss:0, line 679 <- wrt source file 2025-03-17T18:45:19.3848895Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::BCELoss:0 2025-03-17T18:45:19.3851174Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::BCEWithLogitsLoss:0, line 750 <- wrt source file 2025-03-17T18:45:19.3860629Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::BCEWithLogitsLoss:0 2025-03-17T18:45:19.3863171Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MultiLabelMarginLoss:0, line 943 <- wrt source file 2025-03-17T18:45:19.3868891Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MultiLabelMarginLoss:0 2025-03-17T18:45:19.3871325Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CrossEntropyLoss:0, line 1265 <- wrt source file 2025-03-17T18:45:19.3878151Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CrossEntropyLoss:0 2025-03-17T18:45:19.3881052Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CosineEmbeddingLoss:0, line 1405 <- wrt source file 2025-03-17T18:45:19.3888360Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CosineEmbeddingLoss:0 2025-03-17T18:45:19.3890784Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MarginRankingLoss:0, line 1470 <- wrt source file 2025-03-17T18:45:19.3896059Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MarginRankingLoss:0 2025-03-17T18:45:19.3898430Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MultiMarginLoss:0, line 1549 <- wrt source file 2025-03-17T18:45:19.3904814Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::MultiMarginLoss:0 2025-03-17T18:45:19.3907260Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::TripletMarginLoss:0, line 1649 <- wrt source file 2025-03-17T18:45:19.3917010Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::TripletMarginLoss:0 2025-03-17T18:45:19.3919294Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CTCLoss:0, line 1890 <- wrt source file 2025-03-17T18:45:19.3949075Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py::CTCLoss:0 2025-03-17T18:45:19.3951511Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.register_buffer:0, line 538 <- wrt source file 2025-03-17T18:45:19.3954379Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.register_buffer:0 2025-03-17T18:45:19.3956889Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.apply:0, line 1020 <- wrt source file 2025-03-17T18:45:19.3964331Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.apply:0 2025-03-17T18:45:19.3966614Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.to:0, line 1274 <- wrt source file 2025-03-17T18:45:19.3971156Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.to:0 2025-03-17T18:45:19.3973488Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.state_dict:0, line 2192 <- wrt source file 2025-03-17T18:45:19.3975958Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.state_dict:0 2025-03-17T18:45:19.3978383Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.parameters:0, line 2634 <- wrt source file 2025-03-17T18:45:19.3980836Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.parameters:0 2025-03-17T18:45:19.3983320Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_parameters:0, line 2662 <- wrt source file 2025-03-17T18:45:19.3985984Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_parameters:0 2025-03-17T18:45:19.3988487Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.buffers:0, line 2689 <- wrt source file 2025-03-17T18:45:19.3990876Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.buffers:0 2025-03-17T18:45:19.3993273Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_buffers:0, line 2716 <- wrt source file 2025-03-17T18:45:19.3995856Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_buffers:0 2025-03-17T18:45:19.3998336Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_children:0, line 2747 <- wrt source file 2025-03-17T18:45:19.4000855Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_children:0 2025-03-17T18:45:19.4003313Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.modules:0, line 2771 <- wrt source file 2025-03-17T18:45:19.4005711Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.modules:0 2025-03-17T18:45:19.4008045Z * DOCTEST 
: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_modules:0, line 2809 <- wrt source file 2025-03-17T18:45:19.4010714Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/module.py::Module.named_modules:0 2025-03-17T18:45:19.4013254Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::LocalResponseNorm:0, line 38 <- wrt source file 2025-03-17T18:45:19.4027847Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::LocalResponseNorm:0 2025-03-17T18:45:19.4031169Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::LayerNorm:0, line 151 <- wrt source file 2025-03-17T18:45:19.4037872Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::LayerNorm:0 2025-03-17T18:45:19.4040686Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::GroupNorm:0, line 262 <- wrt source file 2025-03-17T18:45:19.4046692Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::GroupNorm:0 2025-03-17T18:45:19.4049466Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::RMSNorm:0, line 355 <- wrt source file 2025-03-17T18:45:19.4053234Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/normalization.py::RMSNorm:0 2025-03-17T18:45:19.4055595Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad1d:0, line 69 <- wrt source file 2025-03-17T18:45:19.4060537Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad1d:0 2025-03-17T18:45:19.4081210Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad2d:0, line 120 <- wrt source file 2025-03-17T18:45:19.4083630Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad2d:0 2025-03-17T18:45:19.4085993Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad3d:0, line 184 <- wrt source file 2025-03-17T18:45:20.0582984Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::CircularPad3d:0 2025-03-17T18:45:20.0851686Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad1d:0, line 238 <- wrt source file 2025-03-17T18:45:20.0862088Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad1d:0 2025-03-17T18:45:20.0864498Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad2d:0, line 291 <- wrt source file 2025-03-17T18:45:20.0868361Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad2d:0 2025-03-17T18:45:20.0870728Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad3d:0, line 347 <- wrt source file 2025-03-17T18:45:20.0892751Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ConstantPad3d:0 2025-03-17T18:45:20.0895146Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad1d:0, line 391 <- wrt source file 
2025-03-17T18:45:20.0900310Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad1d:0 2025-03-17T18:45:20.0902713Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad2d:0, line 435 <- wrt source file 2025-03-17T18:45:20.0907547Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad2d:0 2025-03-17T18:45:20.0909961Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad3d:0, line 492 <- wrt source file 2025-03-17T18:45:20.0912900Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReflectionPad3d:0 2025-03-17T18:45:20.0915287Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad1d:0, line 550 <- wrt source file 2025-03-17T18:45:20.0921059Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad1d:0 2025-03-17T18:45:20.0923462Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad2d:0, line 593 <- wrt source file 2025-03-17T18:45:20.0928751Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad2d:0 2025-03-17T18:45:20.0931154Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad3d:0, line 650 <- wrt source file 2025-03-17T18:45:20.6206242Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ReplicationPad3d:0 2025-03-17T18:45:20.6473926Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad1d:0, line 684 <- wrt source file 2025-03-17T18:45:20.6484294Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad1d:0 2025-03-17T18:45:20.6486415Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad2d:0, line 739 <- wrt source file 2025-03-17T18:45:20.6492228Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad2d:0 2025-03-17T18:45:20.6494550Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad3d:0, line 798 <- wrt source file 2025-03-17T18:45:20.6516387Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/padding.py::ZeroPad3d:0 2025-03-17T18:45:20.6518789Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pixelshuffle.py::PixelShuffle:0, line 40 <- wrt source file 2025-03-17T18:45:20.6522523Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pixelshuffle.py::PixelShuffle:0 2025-03-17T18:45:20.6525019Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pixelshuffle.py::PixelUnshuffle:0, line 93 <- wrt source file 2025-03-17T18:45:20.6528161Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pixelshuffle.py::PixelUnshuffle:0 2025-03-17T18:45:20.6530546Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool1d:0, line 118 <- wrt source file 2025-03-17T18:45:20.6535101Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool1d:0 2025-03-17T18:45:20.6537554Z * 
DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool2d:0, line 195 <- wrt source file 2025-03-17T18:45:20.6590697Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool2d:0 2025-03-17T18:45:20.6592936Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool3d:0, line 278 <- wrt source file 2025-03-17T18:45:20.8869503Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxPool3d:0 2025-03-17T18:45:20.8925258Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxUnpool1d:0, line 352 <- wrt source file 2025-03-17T18:45:20.8936595Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxUnpool1d:0 2025-03-17T18:45:20.8939061Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxUnpool3d:0, line 534 <- wrt source file 2025-03-17T18:45:20.9749999Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::MaxUnpool3d:0 2025-03-17T18:45:20.9752362Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool1d:0, line 622 <- wrt source file 2025-03-17T18:45:20.9758995Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool1d:0 2025-03-17T18:45:20.9761287Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool2d:0, line 714 <- wrt source file 2025-03-17T18:45:20.9802598Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool2d:0 2025-03-17T18:45:20.9804885Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool3d:0, line 827 <- wrt source file 2025-03-17T18:45:21.1476498Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AvgPool3d:0 2025-03-17T18:45:21.1533219Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool2d:0, line 917 <- wrt source file 2025-03-17T18:45:21.1584541Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool2d:0 2025-03-17T18:45:21.1587088Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool3d:0, line 1003 <- wrt source file 2025-03-17T18:45:21.2359809Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::FractionalMaxPool3d:0 2025-03-17T18:45:21.2362002Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool1d:0, line 1117 <- wrt source file 2025-03-17T18:45:21.2370682Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool1d:0 2025-03-17T18:45:21.2373390Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool2d:0, line 1168 <- wrt source file 2025-03-17T18:45:21.2426235Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool2d:0 2025-03-17T18:45:21.2429443Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool3d:0, line 1227 <- wrt source file 2025-03-17T18:45:21.4750503Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::LPPool3d:0 2025-03-17T18:45:21.4806616Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool1d:0, line 1282 <- wrt source file 2025-03-17T18:45:21.4813605Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool1d:0 2025-03-17T18:45:21.4816101Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool2d:0, line 1316 <- wrt source file 2025-03-17T18:45:21.4823557Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool2d:0 2025-03-17T18:45:21.4826047Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool3d:0, line 1359 <- wrt source file 2025-03-17T18:45:21.4920455Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveMaxPool3d:0 2025-03-17T18:45:21.4922964Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool1d:0, line 1406 <- wrt source file 2025-03-17T18:45:21.4925785Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool1d:0 2025-03-17T18:45:21.4928275Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool2d:0, line 1437 <- wrt source file 2025-03-17T18:45:21.4933932Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool2d:0 2025-03-17T18:45:21.4936512Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool3d:0, line 1476 <- wrt source file 2025-03-17T18:45:21.4958819Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py::AdaptiveAvgPool3d:0 2025-03-17T18:45:21.4961074Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::RNN:0, line 591 <- wrt source file 2025-03-17T18:45:21.4970262Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::RNN:0 2025-03-17T18:45:21.4972361Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::LSTM:0, line 948 <- wrt source file 2025-03-17T18:45:21.5314942Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::LSTM:0 2025-03-17T18:45:21.5317778Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::GRU:0, line 1285 <- wrt source file 2025-03-17T18:45:21.5331103Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::GRU:0 2025-03-17T18:45:21.5333483Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::RNNCell:0, line 1536 <- wrt source file 2025-03-17T18:45:21.5343634Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::RNNCell:0 2025-03-17T18:45:21.5345834Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::LSTMCell:0, line 1658 <- wrt source file 2025-03-17T18:45:21.5355605Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::LSTMCell:0 2025-03-17T18:45:21.5358053Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::GRUCell:0, line 1772 <- wrt source file 
2025-03-17T18:45:21.5369756Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/rnn.py::GRUCell:0 2025-03-17T18:45:21.5373823Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::Embedding:0, line 69 <- wrt source file 2025-03-17T18:45:21.5385364Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::Embedding:0 2025-03-17T18:45:21.5387901Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::Embedding.from_pretrained:0, line 241 <- wrt source file 2025-03-17T18:45:21.5390900Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::Embedding.from_pretrained:0 2025-03-17T18:45:21.5393524Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::EmbeddingBag.from_pretrained:0, line 519 <- wrt source file 2025-03-17T18:45:21.5397701Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py::EmbeddingBag.from_pretrained:0 2025-03-17T18:45:21.5400272Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::Transformer:0, line 88 <- wrt source file 2025-03-17T18:45:22.4007833Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::Transformer:0 2025-03-17T18:45:22.4023989Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::Transformer.forward:0, line 256 <- wrt source file 2025-03-17T18:45:22.4026809Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::Transformer.forward:0 2025-03-17T18:45:22.4029398Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerEncoder:0, line 326 <- wrt source file 2025-03-17T18:45:22.5195870Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerEncoder:0 2025-03-17T18:45:22.5234057Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerDecoder:0, line 544 <- wrt source file 2025-03-17T18:45:22.7733403Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerDecoder:0 2025-03-17T18:45:22.7741870Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerEncoderLayer:0, line 667 <- wrt source file 2025-03-17T18:45:22.8049119Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerEncoderLayer:0 2025-03-17T18:45:22.8051743Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerDecoderLayer:0, line 973 <- wrt source file 2025-03-17T18:45:22.8619093Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py::TransformerDecoderLayer:0 2025-03-17T18:45:22.8622033Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::Upsample:0, line 77 <- wrt source file 2025-03-17T18:45:22.8645795Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::Upsample:0 2025-03-17T18:45:22.8648314Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::UpsamplingNearest2d:0, line 223 <- wrt source file 2025-03-17T18:45:22.8657923Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::UpsamplingNearest2d:0 2025-03-17T18:45:22.8660721Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::UpsamplingBilinear2d:0, line 273 <- wrt source file 2025-03-17T18:45:22.8667589Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/upsampling.py::UpsamplingBilinear2d:0 2025-03-17T18:45:22.8670198Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/data_parallel.py::DataParallel:0, line 127 <- wrt source file 2025-03-17T18:45:22.8672757Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/data_parallel.py::DataParallel:0 2025-03-17T18:45:22.8675396Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel:0, line 625 <- wrt source file 2025-03-17T18:45:22.8678196Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel:0 2025-03-17T18:45:22.8681020Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.no_sync:0, line 1423 <- wrt source file 2025-03-17T18:45:22.8683950Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.no_sync:0 2025-03-17T18:45:22.8687222Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:0, line 1975 <- wrt source file 2025-03-17T18:45:22.8690386Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:0 2025-03-17T18:45:22.8693490Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:1, line 1985 <- wrt source file 2025-03-17T18:45:22.8696628Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel.register_comm_hook:1 2025-03-17T18:45:22.8699809Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_builtin_comm_hook:0, line 2020 <- wrt source file 2025-03-17T18:45:22.8703102Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py::DistributedDataParallel._register_builtin_comm_hook:0 2025-03-17T18:45:22.8705976Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_per_sample_grad.py::call_for_per_sample_grads:0, line 35 <- wrt source file 2025-03-17T18:45:22.8708715Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_per_sample_grad.py::call_for_per_sample_grads:0 2025-03-17T18:45:22.8711103Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/init.py::skip_init:0, line 33 <- wrt source file 2025-03-17T18:45:22.8713295Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/init.py::skip_init:0 2025-03-17T18:45:22.8715697Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::orthogonal:0, line 265 <- wrt source file 2025-03-17T18:45:22.8718209Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::orthogonal:0 
2025-03-17T18:45:22.8720664Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::weight_norm:0, line 360 <- wrt source file 2025-03-17T18:45:22.8723185Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::weight_norm:0 2025-03-17T18:45:22.8725733Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::spectral_norm:0, line 591 <- wrt source file 2025-03-17T18:45:22.8728304Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrizations.py::spectral_norm:0 2025-03-17T18:45:22.8730893Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrize.py::register_parametrization:0, line 505 <- wrt source file 2025-03-17T18:45:22.8733573Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/parametrize.py::register_parametrization:0 2025-03-17T18:45:22.8735933Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::identity:0, line 844 <- wrt source file 2025-03-17T18:45:22.8738300Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::identity:0 2025-03-17T18:45:22.8740592Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::random_unstructured:0, line 880 <- wrt source file 2025-03-17T18:45:22.8743001Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::random_unstructured:0 2025-03-17T18:45:22.8745333Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::l1_unstructured:0, line 923 <- wrt source file 2025-03-17T18:45:22.8747832Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::l1_unstructured:0 2025-03-17T18:45:22.8750042Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::remove:0, line 1190 <- wrt source file 2025-03-17T18:45:22.8752211Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::remove:0 2025-03-17T18:45:22.8754362Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::is_pruned:0, line 1218 <- wrt source file 2025-03-17T18:45:22.8756571Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py::is_pruned:0 2025-03-17T18:45:22.8758780Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pad_packed_sequence:0, line 354 <- wrt source file 2025-03-17T18:45:22.8761137Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pad_packed_sequence:0 2025-03-17T18:45:22.8763378Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pad_sequence:0, line 432 <- wrt source file 2025-03-17T18:45:22.8778676Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pad_sequence:0 2025-03-17T18:45:22.8780994Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::unpad_sequence:0, line 490 <- wrt source file 2025-03-17T18:45:22.8783349Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::unpad_sequence:0 2025-03-17T18:45:22.8785790Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pack_sequence:0, line 546 <- wrt source file 2025-03-17T18:45:22.8788180Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::pack_sequence:0 2025-03-17T18:45:22.8790495Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::unpack_sequence:0, line 574 <- wrt source file 2025-03-17T18:45:22.8792876Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/rnn.py::unpack_sequence:0 2025-03-17T18:45:22.8795381Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/spectral_norm.py::spectral_norm:0, line 313 <- wrt source file 2025-03-17T18:45:22.8797940Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/spectral_norm.py::spectral_norm:0 2025-03-17T18:45:22.8800512Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/spectral_norm.py::remove_spectral_norm:0, line 345 <- wrt source file 2025-03-17T18:45:22.8805164Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/spectral_norm.py::remove_spectral_norm:0 2025-03-17T18:45:22.8807665Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/stateless.py::functional_call:0, line 196 <- wrt source file 2025-03-17T18:45:22.8810114Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/stateless.py::functional_call:0 2025-03-17T18:45:22.8812440Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py::weight_norm:0, line 133 <- wrt source file 2025-03-17T18:45:22.8815579Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py::weight_norm:0 2025-03-17T18:45:22.8817984Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py::remove_weight_norm:0, line 155 <- wrt source file 2025-03-17T18:45:22.8821155Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py::remove_weight_norm:0 2025-03-17T18:45:22.8823818Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_expanded_weights/conv_utils.py::unfold3d:0, line 315 <- wrt source file 2025-03-17T18:45:22.8826528Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_expanded_weights/conv_utils.py::unfold3d:0 2025-03-17T18:45:22.8829449Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_expanded_weights/expanded_weights_utils.py::sum_over_all_but_batch_and_last_n:0, line 178 <- wrt source file 2025-03-17T18:45:22.8847248Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/_expanded_weights/expanded_weights_utils.py::sum_over_all_but_batch_and_last_n:0 2025-03-17T18:45:22.8849974Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::LambdaLR:0, line 258 <- wrt source file 2025-03-17T18:45:22.8852272Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::LambdaLR:0 2025-03-17T18:45:22.8854601Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::MultiplicativeLR:0, line 353 <- wrt source file 2025-03-17T18:45:22.8857059Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::MultiplicativeLR:0 2025-03-17T18:45:22.8859348Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::StepLR:0, line 446 <- wrt source file 2025-03-17T18:45:22.8861683Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::StepLR:0 2025-03-17T18:45:22.8863932Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::MultiStepLR:0, line 499 <- wrt source file 2025-03-17T18:45:22.8866270Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::MultiStepLR:0 2025-03-17T18:45:22.8868608Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ConstantLR:0, line 557 <- wrt source file 2025-03-17T18:45:22.8870906Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ConstantLR:0 2025-03-17T18:45:22.8873244Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::LinearLR:0, line 628 <- wrt source file 2025-03-17T18:45:22.8875513Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::LinearLR:0 2025-03-17T18:45:22.8877789Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::SequentialLR:0, line 748 <- wrt source file 2025-03-17T18:45:22.8880165Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::SequentialLR:0 2025-03-17T18:45:22.8882484Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::PolynomialLR:0, line 889 <- wrt source file 2025-03-17T18:45:22.8884871Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::PolynomialLR:0 2025-03-17T18:45:22.8887256Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ChainedScheduler:0, line 1037 <- wrt source file 2025-03-17T18:45:22.8889717Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ChainedScheduler:0 2025-03-17T18:45:22.8892148Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ReduceLROnPlateau:0, line 1174 <- wrt source file 2025-03-17T18:45:22.8894737Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::ReduceLROnPlateau:0 2025-03-17T18:45:22.8897075Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CyclicLR:0, line 1414 <- wrt source file 2025-03-17T18:45:22.8899356Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CyclicLR:0 2025-03-17T18:45:22.8901898Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:0, line 1676 <- wrt source file 2025-03-17T18:45:22.8904721Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:0 2025-03-17T18:45:22.8907551Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:1, line 1692 <- wrt source file 2025-03-17T18:45:22.8910393Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::CosineAnnealingWarmRestarts.step:1 2025-03-17T18:45:22.8912904Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::OneCycleLR:0, line 1830 <- wrt source file 2025-03-17T18:45:22.8915236Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/lr_scheduler.py::OneCycleLR:0 2025-03-17T18:45:22.8917451Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py::update_bn:0, line 331 <- wrt source file 2025-03-17T18:45:22.8919716Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py::update_bn:0 2025-03-17T18:45:22.8921938Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/package/glob_group.py::GlobGroup:0, line 22 <- wrt source file 2025-03-17T18:45:22.8924366Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/package/glob_group.py::GlobGroup:0 2025-03-17T18:45:22.8926953Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/profiler/profiler.py::_KinetoProfile.toggle_collection_dynamic:0, line 283 <- wrt source file 2025-03-17T18:45:22.8929918Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/profiler/profiler.py::_KinetoProfile.toggle_collection_dynamic:0 2025-03-17T18:45:22.8932451Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/profiler/profiler.py::profile:0, line 605 <- wrt source file 2025-03-17T18:45:22.8934711Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/profiler/profiler.py::profile:0 2025-03-17T18:45:22.8937284Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/semi_structured.py::to_sparse_semi_structured:0, line 338 <- wrt source file 2025-03-17T18:45:22.8939992Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/semi_structured.py::to_sparse_semi_structured:0 2025-03-17T18:45:22.8942426Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_creation.py::make_tensor:0, line 114 <- wrt source file 2025-03-17T18:45:22.8944731Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_creation.py::make_tensor:0 2025-03-17T18:45:22.8947189Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::parametrize:0, line 614 <- wrt source file 2025-03-17T18:45:22.8949773Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::parametrize:0 2025-03-17T18:45:22.8952314Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::reparametrize:0, line 735 <- wrt source file 2025-03-17T18:45:22.8955041Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::reparametrize:0 2025-03-17T18:45:22.8957580Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::decorateIf:0, line 824 <- wrt source file 2025-03-17T18:45:22.8960142Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::decorateIf:0 2025-03-17T18:45:22.8962823Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_symmetric_psd_matrix:0, line 4649 <- wrt source file 2025-03-17T18:45:22.8965707Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_symmetric_psd_matrix:0 2025-03-17T18:45:22.8968531Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_psd_matrix:0, line 4663 <- wrt source file 2025-03-17T18:45:22.8971385Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_psd_matrix:0 
2025-03-17T18:45:22.8974181Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_pd_matrix:0, line 4693 <- wrt source file 2025-03-17T18:45:22.8977028Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/common_utils.py::random_hermitian_pd_matrix:0 2025-03-17T18:45:22.8979765Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/logging_utils.py::logs_to_string:0, line 194 <- wrt source file 2025-03-17T18:45:22.8982389Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/logging_utils.py::logs_to_string:0 2025-03-17T18:45:22.8985068Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/logging_utils.py::multiple_logs_to_string:0, line 220 <- wrt source file 2025-03-17T18:45:22.8987915Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/logging_utils.py::multiple_logs_to_string:0 2025-03-17T18:45:22.8990936Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py::skip_unless_torch_gpu:0, line 313 <- wrt source file 2025-03-17T18:45:22.8994142Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/distributed/_tensor/common_dtensor.py::skip_unless_torch_gpu:0 2025-03-17T18:45:22.8997305Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/optests/autograd_registration.py::autograd_registration_check:0, line 29 <- wrt source file 2025-03-17T18:45:22.9000597Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_internal/optests/autograd_registration.py::autograd_registration_check:0 2025-03-17T18:45:22.9003275Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_is_leaf:0, line 247 <- wrt source file 2025-03-17T18:45:22.9005566Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_is_leaf:0 2025-03-17T18:45:22.9007835Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_flatten:0, line 290 <- wrt source file 2025-03-17T18:45:22.9010148Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_flatten:0 2025-03-17T18:45:22.9012514Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_unflatten:0, line 327 <- wrt source file 2025-03-17T18:45:22.9014870Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_unflatten:0 2025-03-17T18:45:22.9017120Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_iter:0, line 357 <- wrt source file 2025-03-17T18:45:22.9019372Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_iter:0 2025-03-17T18:45:22.9021597Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_leaves:0, line 392 <- wrt source file 2025-03-17T18:45:22.9023879Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_leaves:0 2025-03-17T18:45:22.9026173Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_structure:0, line 427 <- wrt source file 2025-03-17T18:45:22.9028576Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_structure:0 2025-03-17T18:45:22.9030806Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_map:0, line 464 <- wrt source file 2025-03-17T18:45:22.9033031Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::tree_map:0 2025-03-17T18:45:22.9035305Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::broadcast_prefix:0, line 880 <- wrt source file 2025-03-17T18:45:22.9037877Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_cxx_pytree.py::broadcast_prefix:0 2025-03-17T18:45:22.9040227Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::register_dataclass:0, line 268 <- wrt source file 2025-03-17T18:45:22.9042595Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::register_dataclass:0 2025-03-17T18:45:22.9044898Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::register_constant:0, line 328 <- wrt source file 2025-03-17T18:45:22.9047695Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::register_constant:0 2025-03-17T18:45:22.9049904Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::tree_map:0, line 1115 <- wrt source file 2025-03-17T18:45:22.9052060Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_pytree.py::tree_map:0 2025-03-17T18:45:22.9054523Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::rename_privateuse1_backend:0, line 69 <- wrt source file 2025-03-17T18:45:22.9057331Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::rename_privateuse1_backend:0 2025-03-17T18:45:22.9060228Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::generate_methods_for_privateuse1_backend:0, line 322 <- wrt source file 2025-03-17T18:45:22.9063290Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::generate_methods_for_privateuse1_backend:0 2025-03-17T18:45:22.9066089Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::_get_custom_mod_func:0, line 354 <- wrt source file 2025-03-17T18:45:22.9068823Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/backend_registration.py::_get_custom_mod_func:0 2025-03-17T18:45:22.9071495Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py::checkpoint_sequential:0, line 547 <- wrt source file 2025-03-17T18:45:22.9074036Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py::checkpoint_sequential:0 2025-03-17T18:45:22.9076542Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py::set_checkpoint_early_stop:0, line 749 <- wrt source file 2025-03-17T18:45:22.9079127Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py::set_checkpoint_early_stop:0 2025-03-17T18:45:22.9081470Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/dlpack.py::from_dlpack:0, line 72 <- wrt source file 2025-03-17T18:45:22.9083679Z * SUCCESS: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/dlpack.py::from_dlpack:0 2025-03-17T18:45:22.9086170Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_sympy/functions.py::MinMaxBase._collapse_arguments:0, line 718 <- wrt source file 2025-03-17T18:45:22.9415329Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/_sympy/functions.py::MinMaxBase._collapse_arguments:0 2025-03-17T18:45:22.9417926Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::IterableDataset:0, line 94 <- wrt source file 2025-03-17T18:45:22.9420393Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::IterableDataset:0 2025-03-17T18:45:22.9422926Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::StackDataset:0, line 219 <- wrt source file 2025-03-17T18:45:22.9425321Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::StackDataset:0 2025-03-17T18:45:22.9427717Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::random_split:0, line 441 <- wrt source file 2025-03-17T18:45:22.9430103Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/dataset.py::random_split:0 2025-03-17T18:45:22.9432425Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::Sampler:0, line 34 <- wrt source file 2025-03-17T18:45:22.9434692Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::Sampler:0 2025-03-17T18:45:22.9437215Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::WeightedRandomSampler:0, line 232 <- wrt source file 2025-03-17T18:45:22.9439824Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::WeightedRandomSampler:0 2025-03-17T18:45:22.9442269Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::BatchSampler:0, line 295 <- wrt source file 2025-03-17T18:45:22.9444655Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/sampler.py::BatchSampler:0 2025-03-17T18:45:22.9447044Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::default_convert:0, line 39 <- wrt source file 2025-03-17T18:45:22.9449590Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::default_convert:0 2025-03-17T18:45:22.9452025Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::collate:0, line 137 <- wrt source file 2025-03-17T18:45:22.9454422Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::collate:0 2025-03-17T18:45:22.9456947Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::default_collate:0, line 364 <- wrt source file 2025-03-17T18:45:22.9459501Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/_utils/collate.py::default_collate:0 2025-03-17T18:45:22.9462066Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/datapipe.py::IterDataPipe:0, line 97 <- wrt source file 2025-03-17T18:45:22.9464746Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/datapipe.py::IterDataPipe:0 
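To make the sampler SUCCESS entries just above concrete, this is roughly the kind of self-contained example such a passing doctest exercises (illustrative only; the expected results in the comments follow from the documented batching behaviour):

    from torch.utils.data import BatchSampler, SequentialSampler

    # Group indices 0..9 into batches of three, keeping the short final batch.
    print(list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False)))
    # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

    # With drop_last=True the incomplete final batch is discarded.
    print(list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True)))
    # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]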
2025-03-17T18:45:22.9467388Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/datapipe.py::MapDataPipe:0, line 264 <- wrt source file 2025-03-17T18:45:22.9470031Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/datapipe.py::MapDataPipe:0 2025-03-17T18:45:22.9472777Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/callable.py::MapperIterDataPipe:0, line 52 <- wrt source file 2025-03-17T18:45:22.9475700Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/callable.py::MapperIterDataPipe:0 2025-03-17T18:45:22.9478593Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/callable.py::CollatorIterDataPipe:0, line 198 <- wrt source file 2025-03-17T18:45:22.9481540Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/callable.py::CollatorIterDataPipe:0 2025-03-17T18:45:22.9484611Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combinatorics.py::ShufflerIterDataPipe:0, line 88 <- wrt source file 2025-03-17T18:45:22.9487704Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combinatorics.py::ShufflerIterDataPipe:0 2025-03-17T18:45:22.9490676Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ConcaterIterDataPipe:0, line 38 <- wrt source file 2025-03-17T18:45:22.9493723Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ConcaterIterDataPipe:0 2025-03-17T18:45:22.9496628Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ForkerIterDataPipe:0, line 88 <- wrt source file 2025-03-17T18:45:22.9499579Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ForkerIterDataPipe:0 2025-03-17T18:45:22.9502406Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::_ChildDataPipe:0, line 307 <- wrt source file 2025-03-17T18:45:22.9505231Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::_ChildDataPipe:0 2025-03-17T18:45:22.9508223Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::DemultiplexerIterDataPipe:0, line 393 <- wrt source file 2025-03-17T18:45:22.9511330Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::DemultiplexerIterDataPipe:0 2025-03-17T18:45:22.9514364Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::MultiplexerIterDataPipe:0, line 603 <- wrt source file 2025-03-17T18:45:22.9517424Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::MultiplexerIterDataPipe:0 2025-03-17T18:45:22.9520441Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ZipperIterDataPipe:0, line 671 <- wrt source file 2025-03-17T18:45:22.9523393Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/combining.py::ZipperIterDataPipe:0 2025-03-17T18:45:22.9526341Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/filelister.py::FileListerIterDataPipe:0, line 31 <- wrt source file 2025-03-17T18:45:22.9529397Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/filelister.py::FileListerIterDataPipe:0 2025-03-17T18:45:22.9532406Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/fileopener.py::FileOpenerIterDataPipe:0, line 35 <- wrt source file 2025-03-17T18:45:22.9535497Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/fileopener.py::FileOpenerIterDataPipe:0 2025-03-17T18:45:22.9538577Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::BatcherIterDataPipe:0, line 53 <- wrt source file 2025-03-17T18:45:22.9541536Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::BatcherIterDataPipe:0 2025-03-17T18:45:22.9544451Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::UnBatcherIterDataPipe:0, line 113 <- wrt source file 2025-03-17T18:45:22.9547603Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::UnBatcherIterDataPipe:0 2025-03-17T18:45:22.9550546Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::GrouperIterDataPipe:0, line 180 <- wrt source file 2025-03-17T18:45:22.9553488Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/grouping.py::GrouperIterDataPipe:0 2025-03-17T18:45:22.9556358Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/selecting.py::FilterIterDataPipe:0, line 37 <- wrt source file 2025-03-17T18:45:22.9559357Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/selecting.py::FilterIterDataPipe:0 2025-03-17T18:45:22.9562376Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/streamreader.py::StreamReaderIterDataPipe:0, line 25 <- wrt source file 2025-03-17T18:45:22.9565549Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/streamreader.py::StreamReaderIterDataPipe:0 2025-03-17T18:45:22.9568606Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/utils.py::IterableWrapperIterDataPipe:0, line 26 <- wrt source file 2025-03-17T18:45:22.9571647Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/iter/utils.py::IterableWrapperIterDataPipe:0 2025-03-17T18:45:22.9574549Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/callable.py::MapperMapDataPipe:0, line 35 <- wrt source file 2025-03-17T18:45:22.9577804Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/callable.py::MapperMapDataPipe:0 2025-03-17T18:45:22.9580741Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combinatorics.py::ShufflerIterDataPipe:0, line 34 <- wrt source file 2025-03-17T18:45:22.9583911Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combinatorics.py::ShufflerIterDataPipe:0 2025-03-17T18:45:22.9586914Z * DOCTEST : 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combining.py::ConcaterMapDataPipe:0, line 29 <- wrt source file 2025-03-17T18:45:22.9589868Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combining.py::ConcaterMapDataPipe:0 2025-03-17T18:45:22.9592738Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combining.py::ZipperMapDataPipe:0, line 73 <- wrt source file 2025-03-17T18:45:22.9595638Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/combining.py::ZipperMapDataPipe:0 2025-03-17T18:45:22.9598491Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/grouping.py::BatcherMapDataPipe:0, line 29 <- wrt source file 2025-03-17T18:45:22.9601390Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/grouping.py::BatcherMapDataPipe:0 2025-03-17T18:45:22.9604302Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/utils.py::SequenceWrapperMapDataPipe:0, line 26 <- wrt source file 2025-03-17T18:45:22.9607314Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/map/utils.py::SequenceWrapperMapDataPipe:0 2025-03-17T18:45:22.9610222Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/utils/common.py::validate_input_col:0, line 37 <- wrt source file 2025-03-17T18:45:22.9613067Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/utils/common.py::validate_input_col:0 2025-03-17T18:45:22.9615847Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/utils/decoder.py::basichandlers:0, line 47 <- wrt source file 2025-03-17T18:45:22.9618639Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/datapipes/utils/decoder.py::basichandlers:0 2025-03-17T18:45:22.9621331Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/hipify/hipify_python.py::find_closure_group:0, line 440 <- wrt source file 2025-03-17T18:45:22.9623974Z * SUCCESS: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/hipify/hipify_python.py::find_closure_group:0 2025-03-17T18:45:22.9626665Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/hipify/hipify_python.py::replace_extern_shared:0, line 536 <- wrt source file 2025-03-17T18:45:22.9629389Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/hipify/hipify_python.py::replace_extern_shared:0 2025-03-17T18:45:22.9632063Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.__init__:0, line 216 <- wrt source file 2025-03-17T18:45:22.9634775Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.__init__:0 2025-03-17T18:45:22.9637613Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_hparams:0, line 314 <- wrt source file 2025-03-17T18:45:22.9640423Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_hparams:0 2025-03-17T18:45:22.9643161Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalar:0, line 362 <- 
wrt source file 2025-03-17T18:45:22.9646064Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalar:0 2025-03-17T18:45:22.9648798Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalars:0, line 394 <- wrt source file 2025-03-17T18:45:22.9651592Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_scalars:0 2025-03-17T18:45:22.9654324Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_tensor:0, line 441 <- wrt source file 2025-03-17T18:45:22.9657101Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_tensor:0 2025-03-17T18:45:22.9659865Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram:0, line 480 <- wrt source file 2025-03-17T18:45:22.9662700Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram:0 2025-03-17T18:45:22.9665531Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram_raw:0, line 533 <- wrt source file 2025-03-17T18:45:22.9668503Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_histogram_raw:0 2025-03-17T18:45:22.9671345Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_image:0, line 599 <- wrt source file 2025-03-17T18:45:22.9674099Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_image:0 2025-03-17T18:45:22.9676818Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_images:0, line 648 <- wrt source file 2025-03-17T18:45:22.9679571Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_images:0 2025-03-17T18:45:22.9682355Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_text:0, line 811 <- wrt source file 2025-03-17T18:45:22.9685069Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_text:0 2025-03-17T18:45:22.9687811Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_embedding:0, line 878 <- wrt source file 2025-03-17T18:45:22.9690653Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_embedding:0 2025-03-17T18:45:22.9693420Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_pr_curve:0, line 989 <- wrt source file 2025-03-17T18:45:22.9696225Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_pr_curve:0 2025-03-17T18:45:22.9699212Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_multilinechart:0, line 1063 <- wrt source file 2025-03-17T18:45:22.9702457Z * SKIPPED: 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_multilinechart:0 2025-03-17T18:45:22.9705677Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_marginchart:0, line 1084 <- wrt source file 2025-03-17T18:45:22.9708912Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars_marginchart:0 2025-03-17T18:45:22.9711933Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars:0, line 1108 <- wrt source file 2025-03-17T18:45:22.9714879Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_custom_scalars:0 2025-03-17T18:45:22.9717686Z * DOCTEST : /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_mesh:0, line 1154 <- wrt source file 2025-03-17T18:45:22.9720431Z * SKIPPED: /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/tensorboard/writer.py::SummaryWriter.add_mesh:0 2025-03-17T18:45:22.9721867Z ============ 2025-03-17T18:45:22.9722369Z Finished doctests 2025-03-17T18:45:22.9722801Z 342 / 709 passed 2025-03-17T18:45:22.9723223Z  2025-03-17T18:45:22.9723781Z === Found 116 parse-time warnings === 2025-03-17T18:45:22.9724551Z --- Parse Warning: 1 / 116 --- 2025-03-17T18:45:22.9726620Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Tensor.dim_order in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py line=1507. 2025-03-17T18:45:22.9728998Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:22.9729869Z 2025-03-17T18:45:22.9730305Z dim_order(ambiguity_check=False) -> tuple 2025-03-17T18:45:22.9730893Z 2025-03-17T18:45:22.9731528Z Returns the uniquely determined tuple of int describing the dim order or 2025-03-17T18:45:22.9732397Z physical layout of :attr:`self`. 2025-03-17T18:45:22.9732948Z 2025-03-17T18:45:22.9733634Z The dim order represents how dimensions are laid out in memory of dense tensors, 2025-03-17T18:45:22.9734673Z starting from the outermost to the innermost dimension. 2025-03-17T18:45:22.9735360Z 2025-03-17T18:45:22.9735949Z Note that the dim order may not always be uniquely determined. 2025-03-17T18:45:22.9737388Z If `ambiguity_check` is True, this function raises a RuntimeError when the dim order cannot be uniquely determined; 2025-03-17T18:45:22.9739114Z If `ambiguity_check` is a list of memory formats, this function raises a RuntimeError when tensor can not be interpreted 2025-03-17T18:45:22.9740583Z into exactly one of the given memory formats, or it cannot be uniquely determined. 2025-03-17T18:45:22.9741907Z If `ambiguity_check` is False, it will return one of legal dim order(s) without checking its uniqueness. 2025-03-17T18:45:22.9742952Z Otherwise, it will raise TypeError. 2025-03-17T18:45:22.9743512Z 2025-03-17T18:45:22.9743883Z Args: 2025-03-17T18:45:22.9744706Z ambiguity_check (bool or List[torch.memory_format]): The check method for ambiguity of dim order. 
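Before the docstring's own examples, which are quoted next, a brief aside on the ambiguity described above (this snippet is not part of the quoted docstring): the ambiguity comes from size-1 dimensions, whose strides never affect addressing, so more than one dim order can describe the same layout:

    import torch

    t = torch.empty(1, 2, 3, 4)   # contiguous
    print(t.stride())             # (24, 12, 4, 1)
    # Dim 0 has size 1, so its position in the dim order cannot be pinned down from the
    # strides alone; ambiguity_check=True is meant to surface exactly this situation.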
2025-03-17T18:45:22.9745677Z 2025-03-17T18:45:22.9746052Z Examples:: 2025-03-17T18:45:22.9746501Z 2025-03-17T18:45:22.9746919Z >>> torch.empty((2, 3, 5, 7)).dim_order() 2025-03-17T18:45:22.9747503Z (0, 1, 2, 3) 2025-03-17T18:45:22.9748051Z >>> torch.empty((2, 3, 5, 7)).transpose(1, 2).dim_order() 2025-03-17T18:45:22.9748712Z (0, 2, 1, 3) 2025-03-17T18:45:22.9749359Z >>> torch.empty((2, 3, 5, 7), memory_format=torch.channels_last).dim_order() 2025-03-17T18:45:22.9750133Z (0, 2, 3, 1) 2025-03-17T18:45:22.9750606Z >>> torch.empty((1, 2, 3, 4)).dim_order() 2025-03-17T18:45:22.9751182Z (0, 1, 2, 3) 2025-03-17T18:45:22.9751612Z >>> try: 2025-03-17T18:45:22.9752167Z ... torch.empty((1, 2, 3, 4)).dim_order(ambiguity_check=True) 2025-03-17T18:45:22.9753047Z ... except RuntimeError as e: 2025-03-17T18:45:22.9753612Z ... print(e) 2025-03-17T18:45:22.9754499Z The tensor does not have unique dim order, or cannot map to exact one of the given memory formats. 2025-03-17T18:45:22.9755525Z >>> torch.empty((1, 2, 3, 4)).dim_order( 2025-03-17T18:45:22.9756332Z ... ambiguity_check=[torch.contiguous_format, torch.channels_last] 2025-03-17T18:45:22.9757195Z ... ) # It can be mapped to contiguous format 2025-03-17T18:45:22.9757811Z (0, 1, 2, 3) 2025-03-17T18:45:22.9758244Z >>> try: 2025-03-17T18:45:22.9758837Z ... torch.empty((1, 2, 3, 4)).dim_order(ambiguity_check="ILLEGAL") 2025-03-17T18:45:22.9759597Z ... except TypeError as e: 2025-03-17T18:45:22.9760140Z ... print(e) 2025-03-17T18:45:22.9760876Z The ambiguity_check argument must be a bool or a list of memory formats. 2025-03-17T18:45:22.9761693Z 2025-03-17T18:45:22.9762063Z .. warning:: 2025-03-17T18:45:22.9762688Z The dim_order tensor API is experimental and subject to change. 2025-03-17T18:45:22.9763429Z 2025-03-17T18:45:22.9764114Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:22.9764969Z 2025-03-17T18:45:22.9765359Z warnings.warn(msg) 2025-03-17T18:45:22.9765820Z 2025-03-17T18:45:22.9766412Z --- Parse Warning: 2 / 116 --- 2025-03-17T18:45:22.9768429Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=meshgrid in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py line=446. 2025-03-17T18:45:22.9770790Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:22.9771960Z Creates grids of coordinates specified by the 1D inputs in `attr`:tensors. 2025-03-17T18:45:22.9772777Z 2025-03-17T18:45:22.9773309Z This is helpful when you want to visualize data over some 2025-03-17T18:45:22.9774136Z range of inputs. See below for a plotting example. 2025-03-17T18:45:22.9774783Z 2025-03-17T18:45:22.9775281Z Given :math:`N` 1D tensors :math:`T_0 \ldots T_{N-1}` as 2025-03-17T18:45:22.9776140Z inputs with corresponding sizes :math:`S_0 \ldots S_{N-1}`, 2025-03-17T18:45:22.9777111Z this creates :math:`N` N-dimensional tensors :math:`G_0 \ldots 2025-03-17T18:45:22.9777968Z G_{N-1}`, each with shape :math:`(S_0, ..., S_{N-1})` where 2025-03-17T18:45:22.9778819Z the output :math:`G_i` is constructed by expanding :math:`T_i` 2025-03-17T18:45:22.9779584Z to the result shape. 2025-03-17T18:45:22.9780092Z 2025-03-17T18:45:22.9780465Z .. note:: 2025-03-17T18:45:22.9781029Z 0D inputs are treated equivalently to 1D inputs of a 2025-03-17T18:45:22.9781723Z single element. 2025-03-17T18:45:22.9782207Z 2025-03-17T18:45:22.9782591Z .. 
warning:: 2025-03-17T18:45:22.9783223Z `torch.meshgrid(*tensors)` currently has the same behavior 2025-03-17T18:45:22.9784086Z as calling `numpy.meshgrid(*arrays, indexing='ij')`. 2025-03-17T18:45:22.9784758Z 2025-03-17T18:45:22.9785224Z In the future `torch.meshgrid` will transition to 2025-03-17T18:45:22.9785936Z `indexing='xy'` as the default. 2025-03-17T18:45:22.9786591Z 2025-03-17T18:45:22.9787142Z https://github.com/pytorch/pytorch/issues/50276 tracks 2025-03-17T18:45:22.9788027Z this issue with the goal of migrating to NumPy's behavior. 2025-03-17T18:45:22.9788745Z 2025-03-17T18:45:22.9789126Z .. seealso:: 2025-03-17T18:45:22.9789575Z 2025-03-17T18:45:22.9790092Z :func:`torch.cartesian_prod` has the same effect but it 2025-03-17T18:45:22.9790881Z collects the data in a tensor of vectors. 2025-03-17T18:45:22.9791492Z 2025-03-17T18:45:22.9791948Z Args: 2025-03-17T18:45:22.9792691Z tensors (list of Tensor): list of scalars or 1 dimensional tensors. Scalars will be 2025-03-17T18:45:22.9793721Z treated as tensors of size :math:`(1,)` automatically 2025-03-17T18:45:22.9794416Z 2025-03-17T18:45:22.9794946Z indexing: (str, optional): the indexing mode, either "xy" 2025-03-17T18:45:22.9795798Z or "ij", defaults to "ij". See warning for future changes. 2025-03-17T18:45:22.9796489Z 2025-03-17T18:45:22.9796958Z If "xy" is selected, the first dimension corresponds 2025-03-17T18:45:22.9797764Z to the cardinality of the second input and the second 2025-03-17T18:45:22.9798607Z dimension corresponds to the cardinality of the first 2025-03-17T18:45:22.9799314Z input. 2025-03-17T18:45:22.9799763Z 2025-03-17T18:45:22.9800246Z If "ij" is selected, the dimensions are in the same 2025-03-17T18:45:22.9800979Z order as the cardinality of the inputs. 2025-03-17T18:45:22.9801581Z 2025-03-17T18:45:22.9801943Z Returns: 2025-03-17T18:45:22.9802510Z seq (sequence of Tensors): If the input has :math:`N` 2025-03-17T18:45:22.9803317Z tensors of size :math:`S_0 \ldots S_{N-1}``, then the 2025-03-17T18:45:22.9804153Z output will also have :math:`N` tensors, where each tensor 2025-03-17T18:45:22.9804933Z is of shape :math:`(S_0, ..., S_{N-1})`. 2025-03-17T18:45:22.9805519Z 2025-03-17T18:45:22.9805942Z Example:: 2025-03-17T18:45:22.9806368Z 2025-03-17T18:45:22.9806781Z >>> x = torch.tensor([1, 2, 3]) 2025-03-17T18:45:22.9807376Z >>> y = torch.tensor([4, 5, 6]) 2025-03-17T18:45:22.9807943Z 2025-03-17T18:45:22.9808500Z Observe the element-wise pairings across the grid, (1, 4), 2025-03-17T18:45:22.9809321Z (1, 5), ..., (3, 6). This is the same thing as the 2025-03-17T18:45:22.9809966Z cartesian product. 2025-03-17T18:45:22.9810615Z >>> grid_x, grid_y = torch.meshgrid(x, y, indexing='ij') 2025-03-17T18:45:22.9811303Z >>> grid_x 2025-03-17T18:45:22.9811828Z tensor([[1, 1, 1], 2025-03-17T18:45:22.9812350Z [2, 2, 2], 2025-03-17T18:45:22.9812866Z [3, 3, 3]]) 2025-03-17T18:45:22.9813388Z >>> grid_y 2025-03-17T18:45:22.9813871Z tensor([[4, 5, 6], 2025-03-17T18:45:22.9814395Z [4, 5, 6], 2025-03-17T18:45:22.9814909Z [4, 5, 6]]) 2025-03-17T18:45:22.9815415Z 2025-03-17T18:45:22.9815926Z This correspondence can be seen when these grids are 2025-03-17T18:45:22.9816643Z stacked properly. 2025-03-17T18:45:22.9817369Z >>> torch.equal(torch.cat(tuple(torch.dstack([grid_x, grid_y]))), 2025-03-17T18:45:22.9818193Z ... torch.cartesian_prod(x, y)) 2025-03-17T18:45:22.9818811Z True 2025-03-17T18:45:22.9819232Z 2025-03-17T18:45:22.9819761Z `torch.meshgrid` is commonly used to produce a grid for 2025-03-17T18:45:22.9820488Z plotting. 
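Before the plotting example that follows, and for comparison with the indexing='ij' example quoted above, a short sketch of the NumPy-style indexing='xy' mode that the warning above says will eventually become the default (expected values shown as comments):

    import torch

    x = torch.tensor([1, 2, 3])
    y = torch.tensor([4, 5, 6])
    grid_x, grid_y = torch.meshgrid(x, y, indexing="xy")
    print(grid_x)   # rows of x:    [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
    print(grid_y)   # columns of y: [[4, 4, 4], [5, 5, 5], [6, 6, 6]]

With 'xy' the first output dimension tracks the second input and the second tracks the first, which matches numpy.meshgrid's default behaviour.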
2025-03-17T18:45:22.9821037Z >>> # xdoctest: +REQUIRES(module:matplotlib) 2025-03-17T18:45:22.9821728Z >>> # xdoctest: +REQUIRES(env:DOCTEST_SHOW) 2025-03-17T18:45:22.9822409Z >>> import matplotlib.pyplot as plt 2025-03-17T18:45:22.9823089Z >>> xs = torch.linspace(-5, 5, steps=100) 2025-03-17T18:45:22.9823757Z >>> ys = torch.linspace(-5, 5, steps=100) 2025-03-17T18:45:22.9824447Z >>> x, y = torch.meshgrid(xs, ys, indexing='xy') 2025-03-17T18:45:22.9825144Z >>> z = torch.sin(torch.sqrt(x * x + y * y)) 2025-03-17T18:45:22.9825912Z >>> ax = plt.axes(projection='3d') 2025-03-17T18:45:22.9826701Z >>> ax.plot_surface(x.numpy(), y.numpy(), z.numpy()) 2025-03-17T18:45:22.9827378Z >>> plt.show() 2025-03-17T18:45:22.9827855Z 2025-03-17T18:45:22.9828280Z .. image:: ../_static/img/meshgrid.png 2025-03-17T18:45:22.9828890Z :width: 512 2025-03-17T18:45:22.9829349Z 2025-03-17T18:45:22.9829698Z 2025-03-17T18:45:22.9830405Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:22.9831257Z 2025-03-17T18:45:22.9831641Z warnings.warn(msg) 2025-03-17T18:45:22.9832105Z 2025-03-17T18:45:22.9832670Z --- Parse Warning: 3 / 116 --- 2025-03-17T18:45:22.9834712Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_unique_impl in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/functional.py line=842. 2025-03-17T18:45:22.9837169Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:22.9838620Z unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) -> tuple[Tensor, Tensor, Tensor] 2025-03-17T18:45:22.9839712Z 2025-03-17T18:45:22.9840197Z Returns the unique elements of the input tensor. 2025-03-17T18:45:22.9840837Z 2025-03-17T18:45:22.9841596Z .. note:: This function is different from :func:`torch.unique_consecutive` in the sense that 2025-03-17T18:45:22.9842763Z this function also eliminates non-consecutive duplicate values. 2025-03-17T18:45:22.9843612Z 2025-03-17T18:45:22.9844247Z .. note:: Currently in the CUDA implementation and the CPU implementation, 2025-03-17T18:45:22.9845448Z `torch.unique` always sort the tensor at the beginning regardless of the `sort` argument. 2025-03-17T18:45:22.9846751Z Sorting could be slow, so if your input tensor is already sorted, it is recommended to use 2025-03-17T18:45:22.9847848Z :func:`torch.unique_consecutive` which avoids the sorting. 2025-03-17T18:45:22.9848561Z 2025-03-17T18:45:22.9848931Z Args: 2025-03-17T18:45:22.9849387Z input (Tensor): the input tensor 2025-03-17T18:45:22.9850180Z sorted (bool): Whether to sort the unique elements in ascending order 2025-03-17T18:45:22.9851065Z before returning as output. 2025-03-17T18:45:22.9851865Z return_inverse (bool): Whether to also return the indices for where 2025-03-17T18:45:22.9852888Z elements in the original input ended up in the returned unique list. 2025-03-17T18:45:22.9853955Z return_counts (bool): Whether to also return the counts for each unique 2025-03-17T18:45:22.9854775Z element. 2025-03-17T18:45:22.9855451Z dim (int, optional): the dimension to operate upon. If ``None``, the 2025-03-17T18:45:22.9856442Z unique of the flattened input is returned. Otherwise, each of the 2025-03-17T18:45:22.9857441Z tensors indexed by the given dimension is treated as one of the 2025-03-17T18:45:22.9858439Z elements to apply the unique operation upon. See examples for more 2025-03-17T18:45:22.9859266Z details. 
Default: ``None`` 2025-03-17T18:45:22.9859811Z 2025-03-17T18:45:22.9860181Z Returns: 2025-03-17T18:45:22.9860970Z (Tensor, Tensor (optional), Tensor (optional)): A tensor or a tuple of tensors containing 2025-03-17T18:45:22.9861884Z 2025-03-17T18:45:22.9862445Z - **output** (*Tensor*): the output list of unique scalar elements. 2025-03-17T18:45:22.9863280Z - **inverse_indices** (*Tensor*): (optional) if 2025-03-17T18:45:22.9864079Z :attr:`return_inverse` is True, there will be an additional 2025-03-17T18:45:22.9865011Z returned tensor (same shape as input) representing the indices 2025-03-17T18:45:22.9866070Z for where elements in the original input map to in the output; 2025-03-17T18:45:22.9867084Z otherwise, this function will only return a single tensor. 2025-03-17T18:45:22.9867883Z - **counts** (*Tensor*): (optional) if 2025-03-17T18:45:22.9868639Z :attr:`return_counts` is True, there will be an additional 2025-03-17T18:45:22.9869540Z returned tensor (same shape as output or output.size(dim), 2025-03-17T18:45:22.9870467Z if dim was specified) representing the number of occurrences 2025-03-17T18:45:22.9871247Z for each unique value or tensor. 2025-03-17T18:45:22.9871834Z 2025-03-17T18:45:22.9872208Z Example:: 2025-03-17T18:45:22.9872614Z 2025-03-17T18:45:22.9873208Z >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long)) 2025-03-17T18:45:22.9873994Z >>> output 2025-03-17T18:45:22.9874450Z tensor([1, 2, 3]) 2025-03-17T18:45:22.9874922Z 2025-03-17T18:45:22.9875367Z >>> output, inverse_indices = torch.unique( 2025-03-17T18:45:22.9876264Z ... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True) 2025-03-17T18:45:22.9877092Z >>> output 2025-03-17T18:45:22.9877546Z tensor([1, 2, 3]) 2025-03-17T18:45:22.9878054Z >>> inverse_indices 2025-03-17T18:45:22.9878565Z tensor([0, 2, 1, 2]) 2025-03-17T18:45:22.9879058Z 2025-03-17T18:45:22.9879500Z >>> output, inverse_indices = torch.unique( 2025-03-17T18:45:22.9880382Z ... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True) 2025-03-17T18:45:22.9881243Z >>> output 2025-03-17T18:45:22.9881696Z tensor([1, 2, 3]) 2025-03-17T18:45:22.9882207Z >>> inverse_indices 2025-03-17T18:45:22.9882719Z tensor([[0, 2], 2025-03-17T18:45:22.9883200Z [1, 2]]) 2025-03-17T18:45:22.9883668Z 2025-03-17T18:45:22.9884060Z >>> a = torch.tensor([ 2025-03-17T18:45:22.9884585Z ... [ 2025-03-17T18:45:22.9885029Z ... [1, 1, 0, 0], 2025-03-17T18:45:22.9885566Z ... [1, 1, 0, 0], 2025-03-17T18:45:22.9886105Z ... [0, 0, 1, 1], 2025-03-17T18:45:22.9886672Z ... ], 2025-03-17T18:45:22.9887099Z ... [ 2025-03-17T18:45:22.9887541Z ... [0, 0, 1, 1], 2025-03-17T18:45:22.9888074Z ... [0, 0, 1, 1], 2025-03-17T18:45:22.9888587Z ... [1, 1, 1, 1], 2025-03-17T18:45:22.9889103Z ... ], 2025-03-17T18:45:22.9889537Z ... [ 2025-03-17T18:45:22.9889972Z ... [1, 1, 0, 0], 2025-03-17T18:45:22.9890497Z ... [1, 1, 0, 0], 2025-03-17T18:45:22.9891020Z ... [0, 0, 1, 1], 2025-03-17T18:45:22.9891539Z ... ], 2025-03-17T18:45:22.9891967Z ... ]) 2025-03-17T18:45:22.9892377Z 2025-03-17T18:45:22.9892973Z >>> # If we call `torch.unique(a, dim=0)`, each of the tensors `a[idx, :, :]` 2025-03-17T18:45:22.9893973Z >>> # will be compared. We can see that `a[0, :, :]` and `a[2, :, :]` match 2025-03-17T18:45:22.9894828Z >>> # each other, so one of them will be removed. 
2025-03-17T18:45:22.9895480Z >>> (a[0, :, :] == a[2, :, :]).all() 2025-03-17T18:45:22.9896047Z tensor(True) 2025-03-17T18:45:22.9896569Z >>> a_unique_dim0 = torch.unique(a, dim=0) 2025-03-17T18:45:22.9897188Z >>> a_unique_dim0 2025-03-17T18:45:22.9897689Z tensor([[[0, 0, 1, 1], 2025-03-17T18:45:22.9898200Z [0, 0, 1, 1], 2025-03-17T18:45:22.9898712Z [1, 1, 1, 1]], 2025-03-17T18:45:22.9899233Z [[1, 1, 0, 0], 2025-03-17T18:45:22.9899744Z [1, 1, 0, 0], 2025-03-17T18:45:22.9900260Z [0, 0, 1, 1]]]) 2025-03-17T18:45:22.9900845Z 2025-03-17T18:45:22.9901451Z >>> # Notice which sub-tensors from `a` match with the sub-tensors from 2025-03-17T18:45:22.9902257Z >>> # `a_unique_dim0`: 2025-03-17T18:45:22.9902842Z >>> (a_unique_dim0[0, :, :] == a[1, :, :]).all() 2025-03-17T18:45:22.9903454Z tensor(True) 2025-03-17T18:45:22.9903982Z >>> (a_unique_dim0[1, :, :] == a[0, :, :]).all() 2025-03-17T18:45:22.9904597Z tensor(True) 2025-03-17T18:45:22.9905049Z 2025-03-17T18:45:22.9905621Z >>> # For `torch.unique(a, dim=1)`, each of the tensors `a[:, idx, :]` are 2025-03-17T18:45:22.9906645Z >>> # compared. `a[:, 0, :]` and `a[:, 1, :]` match each other, so one of 2025-03-17T18:45:22.9907410Z >>> # them will be removed. 2025-03-17T18:45:22.9907993Z >>> (a[:, 0, :] == a[:, 1, :]).all() 2025-03-17T18:45:22.9908566Z tensor(True) 2025-03-17T18:45:22.9909048Z >>> torch.unique(a, dim=1) 2025-03-17T18:45:22.9909632Z tensor([[[0, 0, 1, 1], 2025-03-17T18:45:22.9910157Z [1, 1, 0, 0]], 2025-03-17T18:45:22.9910680Z [[1, 1, 1, 1], 2025-03-17T18:45:22.9911186Z [0, 0, 1, 1]], 2025-03-17T18:45:22.9911702Z [[0, 0, 1, 1], 2025-03-17T18:45:22.9912207Z [1, 1, 0, 0]]]) 2025-03-17T18:45:22.9912709Z 2025-03-17T18:45:22.9913296Z >>> # For `torch.unique(a, dim=2)`, the tensors `a[:, :, idx]` are compared. 2025-03-17T18:45:22.9914189Z >>> # `a[:, :, 0]` and `a[:, :, 1]` match each other. Also, `a[:, :, 2]` and 2025-03-17T18:45:22.9915055Z >>> # `a[:, :, 3]` match each other as well. So in this case, two of the 2025-03-17T18:45:22.9915807Z >>> # sub-tensors will be removed. 2025-03-17T18:45:22.9916411Z >>> (a[:, :, 0] == a[:, :, 1]).all() 2025-03-17T18:45:22.9916974Z tensor(True) 2025-03-17T18:45:22.9917462Z >>> (a[:, :, 2] == a[:, :, 3]).all() 2025-03-17T18:45:22.9918028Z tensor(True) 2025-03-17T18:45:22.9918516Z >>> torch.unique(a, dim=2) 2025-03-17T18:45:22.9919075Z tensor([[[0, 1], 2025-03-17T18:45:22.9919547Z [0, 1], 2025-03-17T18:45:22.9920060Z [1, 0]], 2025-03-17T18:45:22.9920549Z [[1, 0], 2025-03-17T18:45:22.9921024Z [1, 0], 2025-03-17T18:45:22.9921506Z [1, 1]], 2025-03-17T18:45:22.9921989Z [[0, 1], 2025-03-17T18:45:22.9922457Z [0, 1], 2025-03-17T18:45:22.9922943Z [1, 0]]]) 2025-03-17T18:45:22.9923422Z 2025-03-17T18:45:22.9924109Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:22.9924968Z 2025-03-17T18:45:22.9925356Z warnings.warn(msg) 2025-03-17T18:45:22.9925819Z 2025-03-17T18:45:22.9926395Z --- Parse Warning: 4 / 116 --- 2025-03-17T18:45:22.9928342Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=560. 2025-03-17T18:45:22.9930547Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:22.9931435Z 2025-03-17T18:45:22.9931915Z Load a model from a github repo or a local directory. 
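The three parse warnings shown so far, and most of those that follow, end with the same root cause: TokenError('unexpected EOF in multi-line statement'). A rough illustration, under the assumption that the failing step tokenizes an example statement before its '...' continuation lines are attached, so a call whose closing parenthesis sits on a later line looks unterminated (the exact error wording can vary between Python versions):

    import io
    import tokenize

    snippet = "output, inverse_indices = torch.unique(\n"   # the ')' lives on a later continuation line
    try:
        list(tokenize.generate_tokens(io.StringIO(snippet).readline))
    except tokenize.TokenError as err:
        print(err)   # e.g. ('unexpected EOF in multi-line statement', (1, 0))

This is a property of how the docstring is parsed, not necessarily of the documented functions or of their example code.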
2025-03-17T18:45:22.9932582Z 2025-03-17T18:45:22.9933211Z Note: Loading a model is the typical use case, but this can also be used to 2025-03-17T18:45:22.9934262Z for loading other objects such as tokenizers, loss functions, etc. 2025-03-17T18:45:22.9935040Z 2025-03-17T18:45:22.9935554Z If ``source`` is 'github', ``repo_or_dir`` is expected to be 2025-03-17T18:45:22.9936384Z of the form ``repo_owner/repo_name[:ref]`` with an optional 2025-03-17T18:45:22.9937274Z ref (a tag or a branch). 2025-03-17T18:45:22.9937759Z 2025-03-17T18:45:22.9938410Z If ``source`` is 'local', ``repo_or_dir`` is expected to be a 2025-03-17T18:45:22.9939143Z path to a local directory. 2025-03-17T18:45:22.9939643Z 2025-03-17T18:45:22.9940013Z Args: 2025-03-17T18:45:22.9940481Z repo_or_dir (str): If ``source`` is 'github', 2025-03-17T18:45:22.9941491Z this should correspond to a github repo with format ``repo_owner/repo_name[:ref]`` with 2025-03-17T18:45:22.9942824Z an optional ref (tag or branch), for example 'pytorch/vision:0.10'. If ``ref`` is not specified, 2025-03-17T18:45:22.9944094Z the default branch is assumed to be ``main`` if it exists, and otherwise ``master``. 2025-03-17T18:45:22.9945180Z If ``source`` is 'local' then it should be a path to a local directory. 2025-03-17T18:45:22.9946134Z model (str): the name of a callable (entrypoint) defined in the 2025-03-17T18:45:22.9946965Z repo/dir's ``hubconf.py``. 2025-03-17T18:45:22.9947719Z *args (optional): the corresponding args for callable ``model``. 2025-03-17T18:45:22.9948618Z source (str, optional): 'github' or 'local'. Specifies how 2025-03-17T18:45:22.9949470Z ``repo_or_dir`` is to be interpreted. Default is 'github'. 2025-03-17T18:45:22.9950389Z trust_repo (bool, str or None): ``"check"``, ``True``, ``False`` or ``None``. 2025-03-17T18:45:22.9951442Z This parameter was introduced in v1.12 and helps ensuring that users 2025-03-17T18:45:22.9952329Z only run code from repos that they trust. 2025-03-17T18:45:22.9952934Z 2025-03-17T18:45:22.9953539Z - If ``False``, a prompt will ask the user whether the repo should 2025-03-17T18:45:22.9954290Z be trusted. 2025-03-17T18:45:22.9954933Z - If ``True``, the repo will be added to the trusted list and loaded 2025-03-17T18:45:22.9955756Z without requiring explicit confirmation. 2025-03-17T18:45:22.9956528Z - If ``"check"``, the repo will be checked against the list of 2025-03-17T18:45:22.9957454Z trusted repos in the cache. If it is not present in that list, the 2025-03-17T18:45:22.9958429Z behaviour will fall back onto the ``trust_repo=False`` option. 2025-03-17T18:45:22.9959362Z - If ``None``: this will raise a warning, inviting the user to set 2025-03-17T18:45:22.9960348Z ``trust_repo`` to either ``False``, ``True`` or ``"check"``. This 2025-03-17T18:45:22.9961303Z is only present for backward compatibility and will be removed in 2025-03-17T18:45:22.9962085Z v2.0. 2025-03-17T18:45:22.9962504Z 2025-03-17T18:45:22.9963098Z Default is ``None`` and will eventually change to ``"check"`` in v2.0. 2025-03-17T18:45:22.9964109Z force_reload (bool, optional): whether to force a fresh download of 2025-03-17T18:45:22.9965083Z the github repo unconditionally. Does not have any effect if 2025-03-17T18:45:22.9965902Z ``source = 'local'``. Default is ``False``. 2025-03-17T18:45:22.9966718Z verbose (bool, optional): If ``False``, mute messages about hitting 2025-03-17T18:45:22.9967717Z local caches. Note that the message about first download cannot be 2025-03-17T18:45:22.9968631Z muted. 
Does not have any effect if ``source = 'local'``. 2025-03-17T18:45:22.9969355Z Default is ``True``. 2025-03-17T18:45:22.9970248Z skip_validation (bool, optional): if ``False``, torchhub will check that the branch or commit 2025-03-17T18:45:22.9971563Z specified by the ``github`` argument properly belongs to the repo owner. This will make 2025-03-17T18:45:22.9972840Z requests to the GitHub API; you can specify a non-default GitHub token by setting the 2025-03-17T18:45:22.9973910Z ``GITHUB_TOKEN`` environment variable. Default is ``False``. 2025-03-17T18:45:22.9974869Z **kwargs (optional): the corresponding kwargs for callable ``model``. 2025-03-17T18:45:22.9975711Z 2025-03-17T18:45:22.9976077Z Returns: 2025-03-17T18:45:22.9976652Z The output of the ``model`` callable when called with the given 2025-03-17T18:45:22.9977406Z ``*args`` and ``**kwargs``. 2025-03-17T18:45:22.9977932Z 2025-03-17T18:45:22.9978300Z Example: 2025-03-17T18:45:22.9978787Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:22.9979427Z >>> # from a github repo 2025-03-17T18:45:22.9979967Z >>> repo = "pytorch/vision" 2025-03-17T18:45:22.9980532Z >>> model = torch.hub.load( 2025-03-17T18:45:22.9981243Z ... repo, "resnet50", weights="ResNet50_Weights.IMAGENET1K_V1" 2025-03-17T18:45:22.9981963Z ... ) 2025-03-17T18:45:22.9982383Z >>> # from a local directory 2025-03-17T18:45:22.9982978Z >>> path = "/some/local/path/pytorch/vision" 2025-03-17T18:45:22.9983611Z >>> # xdoctest: +SKIP 2025-03-17T18:45:22.9984389Z >>> model = torch.hub.load(path, "resnet50", weights="ResNet50_Weights.DEFAULT") 2025-03-17T18:45:22.9985223Z 2025-03-17T18:45:22.9985902Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:22.9986828Z 2025-03-17T18:45:22.9987218Z warnings.warn(msg) 2025-03-17T18:45:22.9987687Z 2025-03-17T18:45:22.9988261Z --- Parse Warning: 5 / 116 --- 2025-03-17T18:45:22.9990244Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_load_local in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=652. 2025-03-17T18:45:22.9992534Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:22.9993411Z 2025-03-17T18:45:22.9993943Z Load a model from a local directory with a ``hubconf.py``. 2025-03-17T18:45:22.9994636Z 2025-03-17T18:45:22.9994999Z Args: 2025-03-17T18:45:22.9995573Z hubconf_dir (str): path to a local directory that contains a 2025-03-17T18:45:22.9996320Z ``hubconf.py``. 2025-03-17T18:45:22.9996980Z model (str): name of an entrypoint defined in the directory's 2025-03-17T18:45:22.9997715Z ``hubconf.py``. 2025-03-17T18:45:22.9998384Z *args (optional): the corresponding args for callable ``model``. 2025-03-17T18:45:22.9999409Z **kwargs (optional): the corresponding kwargs for callable ``model``. 2025-03-17T18:45:23.0000193Z 2025-03-17T18:45:23.0000557Z Returns: 2025-03-17T18:45:23.0001102Z a single model with corresponding pretrained weights. 2025-03-17T18:45:23.0001787Z 2025-03-17T18:45:23.0002158Z Example: 2025-03-17T18:45:23.0002619Z >>> # xdoctest: +SKIP("stub local path") 2025-03-17T18:45:23.0003287Z >>> path = "/some/local/path/pytorch/vision" 2025-03-17T18:45:23.0003920Z >>> model = _load_local( 2025-03-17T18:45:23.0004432Z ... path, 2025-03-17T18:45:23.0004879Z ... "resnet50", 2025-03-17T18:45:23.0005451Z ... weights="ResNet50_Weights.IMAGENET1K_V1", 2025-03-17T18:45:23.0006086Z ... 
) 2025-03-17T18:45:23.0006475Z 2025-03-17T18:45:23.0007156Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0008005Z 2025-03-17T18:45:23.0008401Z warnings.warn(msg) 2025-03-17T18:45:23.0008858Z 2025-03-17T18:45:23.0009398Z --- Parse Warning: 6 / 116 --- 2025-03-17T18:45:23.0011442Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=download_url_to_file in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=691. 2025-03-17T18:45:23.0013754Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0014751Z Download object at the given URL to a local path. 2025-03-17T18:45:23.0015393Z 2025-03-17T18:45:23.0015754Z Args: 2025-03-17T18:45:23.0016291Z url (str): URL of the object to download 2025-03-17T18:45:23.0017170Z dst (str): Full path where object will be saved, e.g. ``/tmp/temporary_file`` 2025-03-17T18:45:23.0018449Z hash_prefix (str, optional): If not None, the SHA256 downloaded file should start with ``hash_prefix``. 2025-03-17T18:45:23.0019479Z Default: None 2025-03-17T18:45:23.0020283Z progress (bool, optional): whether or not to display a progress bar to stderr 2025-03-17T18:45:23.0021162Z Default: True 2025-03-17T18:45:23.0021642Z 2025-03-17T18:45:23.0022008Z Example: 2025-03-17T18:45:23.0022511Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:23.0023188Z >>> # xdoctest: +REQUIRES(POSIX) 2025-03-17T18:45:23.0023815Z >>> torch.hub.download_url_to_file( 2025-03-17T18:45:23.0024626Z ... "https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth", 2025-03-17T18:45:23.0025460Z ... "/tmp/temporary_file", 2025-03-17T18:45:23.0026022Z ... ) 2025-03-17T18:45:23.0026485Z 2025-03-17T18:45:23.0026862Z 2025-03-17T18:45:23.0027556Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0028419Z 2025-03-17T18:45:23.0028842Z warnings.warn(msg) 2025-03-17T18:45:23.0029299Z 2025-03-17T18:45:23.0029848Z --- Parse Warning: 7 / 116 --- 2025-03-17T18:45:23.0031918Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load_state_dict_from_url in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/hub.py line=816. 2025-03-17T18:45:23.0034334Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0035343Z Loads the Torch serialized object at the given URL. 2025-03-17T18:45:23.0035994Z 2025-03-17T18:45:23.0036538Z If downloaded file is a zip file, it will be automatically 2025-03-17T18:45:23.0037397Z decompressed. 2025-03-17T18:45:23.0037849Z 2025-03-17T18:45:23.0038453Z If the object is already present in `model_dir`, it's deserialized and 2025-03-17T18:45:23.0039240Z returned. 2025-03-17T18:45:23.0039875Z The default value of ``model_dir`` is ``/checkpoints`` where 2025-03-17T18:45:23.0040935Z ``hub_dir`` is the directory returned by :func:`~torch.hub.get_dir`. 2025-03-17T18:45:23.0041675Z 2025-03-17T18:45:23.0042041Z Args: 2025-03-17T18:45:23.0042507Z url (str): URL of the object to download 2025-03-17T18:45:23.0043314Z model_dir (str, optional): directory in which to save the object 2025-03-17T18:45:23.0044557Z map_location (optional): a function or a dict specifying how to remap storage locations (see torch.load) 2025-03-17T18:45:23.0045904Z progress (bool, optional): whether or not to display a progress bar to stderr. 
2025-03-17T18:45:23.0046786Z Default: True 2025-03-17T18:45:23.0047722Z check_hash(bool, optional): If True, the filename part of the URL should follow the naming convention 2025-03-17T18:45:23.0048959Z ``filename-.ext`` where ```` is the first eight or more 2025-03-17T18:45:23.0050012Z digits of the SHA256 hash of the contents of the file. The hash is used to 2025-03-17T18:45:23.0051010Z ensure unique names and to verify the contents of the file. 2025-03-17T18:45:23.0051766Z Default: False 2025-03-17T18:45:23.0052730Z file_name (str, optional): name for the downloaded file. Filename from ``url`` will be used if not set. 2025-03-17T18:45:23.0054195Z weights_only(bool, optional): If True, only weights will be loaded and no complex pickled objects. 2025-03-17T18:45:23.0055508Z Recommended for untrusted sources. See :func:`~torch.load` for more details. 2025-03-17T18:45:23.0056372Z 2025-03-17T18:45:23.0056838Z Example: 2025-03-17T18:45:23.0057343Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_HUB) 2025-03-17T18:45:23.0058085Z >>> state_dict = torch.hub.load_state_dict_from_url( 2025-03-17T18:45:23.0058975Z ... "https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth" 2025-03-17T18:45:23.0059759Z ... ) 2025-03-17T18:45:23.0090617Z 2025-03-17T18:45:23.0091020Z 2025-03-17T18:45:23.0091728Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0092591Z 2025-03-17T18:45:23.0092997Z warnings.warn(msg) 2025-03-17T18:45:23.0093463Z 2025-03-17T18:45:23.0094062Z --- Parse Warning: 8 / 116 --- 2025-03-17T18:45:23.0096146Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Library.fallback in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=376. 2025-03-17T18:45:23.0098468Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0099617Z Registers the function implementation as the fallback for the given key. 2025-03-17T18:45:23.0100450Z 2025-03-17T18:45:23.0101065Z This function only works for a library with global namespace ("_"). 2025-03-17T18:45:23.0101855Z 2025-03-17T18:45:23.0102225Z Args: 2025-03-17T18:45:23.0102999Z fn: function used as fallback for the given dispatch key or :func:`~fallthrough_kernel` 2025-03-17T18:45:23.0103962Z to register a fallthrough. 2025-03-17T18:45:23.0105129Z dispatch_key: dispatch key that the input function should be registered for. By default, it uses 2025-03-17T18:45:23.0106272Z the dispatch key that the library was created with. 2025-03-17T18:45:23.0107596Z with_keyset: flag controlling if the current dispatcher call keyset should be passed as the first argument 2025-03-17T18:45:23.0109139Z to :attr:`fn` when calling. This should be used to create the appropriate keyset for redispatch calls. 2025-03-17T18:45:23.0110123Z 2025-03-17T18:45:23.0110491Z Example:: 2025-03-17T18:45:23.0110979Z >>> my_lib = Library("_", "IMPL") 2025-03-17T18:45:23.0111686Z >>> def fallback_kernel(op, *args, **kwargs): 2025-03-17T18:45:23.0112386Z >>> # Handle all autocast ops generically 2025-03-17T18:45:23.0112998Z >>> # ... 
2025-03-17T18:45:23.0113579Z >>> my_lib.fallback(fallback_kernel, "Autocast") 2025-03-17T18:45:23.0114220Z 2025-03-17T18:45:23.0115647Z Original Error: IndentationError('expected an indented block after function definition on line 2', ('', 5, 1, 'my_lib.fallback(fallback_kernel, "Autocast")\n', 5, 7)) 2025-03-17T18:45:23.0117216Z 2025-03-17T18:45:23.0117672Z my_lib.fallback(fallback_kernel, "Autocast") 2025-03-17T18:45:23.0118271Z ^ 2025-03-17T18:45:23.0118661Z warnings.warn(msg) 2025-03-17T18:45:23.0119115Z 2025-03-17T18:45:23.0119663Z --- Parse Warning: 9 / 116 --- 2025-03-17T18:45:23.0121690Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=register_fake in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=920. 2025-03-17T18:45:23.0123968Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0125082Z Register a FakeTensor implementation ("fake impl") for this operator. 2025-03-17T18:45:23.0125885Z 2025-03-17T18:45:23.0126417Z Also sometimes known as a "meta kernel", "abstract impl". 2025-03-17T18:45:23.0127118Z 2025-03-17T18:45:23.0127775Z An "FakeTensor implementation" specifies the behavior of this operator on 2025-03-17T18:45:23.0128961Z Tensors that carry no data ("FakeTensor"). Given some input Tensors with 2025-03-17T18:45:23.0130031Z certain properties (sizes/strides/storage_offset/device), it specifies 2025-03-17T18:45:23.0130972Z what the properties of the output Tensors are. 2025-03-17T18:45:23.0131608Z 2025-03-17T18:45:23.0132245Z The FakeTensor implementation has the same signature as the operator. 2025-03-17T18:45:23.0133304Z It is run for both FakeTensors and meta tensors. To write a FakeTensor 2025-03-17T18:45:23.0134327Z implementation, assume that all Tensor inputs to the operator are 2025-03-17T18:45:23.0135344Z regular CPU/CUDA/Meta tensors, but they do not have storage, and 2025-03-17T18:45:23.0136333Z you are trying to return regular CPU/CUDA/Meta tensor(s) as output. 2025-03-17T18:45:23.0137543Z The FakeTensor implementation must consist of only PyTorch operations 2025-03-17T18:45:23.0138561Z (and may not directly access the storage or data of any input or 2025-03-17T18:45:23.0139346Z intermediate Tensors). 2025-03-17T18:45:23.0139842Z 2025-03-17T18:45:23.0140320Z This API may be used as a decorator (see examples). 
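Parse Warnings 8 and 9 fail differently from the earlier ones: they report an IndentationError instead of a TokenError. In both docstrings the example lines use '>>> ' prompts rather than '... ' continuations, so once the prompts are stripped a 'def' header can end up followed by an unindented statement. A rough reconstruction of source with that shape (hypothetical; only the shape matters here):

    src = (
        'def fallback_kernel(op, *args, **kwargs):\n'
        '# Handle all autocast ops generically\n'
        'my_lib.fallback(fallback_kernel, "Autocast")\n'
    )
    try:
        compile(src, "<doctest>", "exec")
    except IndentationError as err:
        print(err.msg)   # e.g. "expected an indented block after function definition on line 1"

The usual remedies in doctest examples are to keep the body indented after the '>>> ' prompt or to switch body lines to '... ' continuations.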
2025-03-17T18:45:23.0140970Z 2025-03-17T18:45:23.0141432Z For a detailed guide on custom ops, please see 2025-03-17T18:45:23.0142342Z https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html 2025-03-17T18:45:23.0143137Z 2025-03-17T18:45:23.0143496Z Examples: 2025-03-17T18:45:23.0143924Z >>> import torch 2025-03-17T18:45:23.0144422Z >>> import numpy as np 2025-03-17T18:45:23.0144981Z >>> from torch import Tensor 2025-03-17T18:45:23.0145637Z >>> 2025-03-17T18:45:23.0146221Z >>> # Example 1: an operator without data-dependent output shape 2025-03-17T18:45:23.0147244Z >>> @torch.library.custom_op("mylib::custom_linear", mutates_args=()) 2025-03-17T18:45:23.0148273Z >>> def custom_linear(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor: 2025-03-17T18:45:23.0149245Z >>> raise NotImplementedError("Implementation goes here") 2025-03-17T18:45:23.0149951Z >>> 2025-03-17T18:45:23.0150501Z >>> @torch.library.register_fake("mylib::custom_linear") 2025-03-17T18:45:23.0151213Z >>> def _(x, weight, bias): 2025-03-17T18:45:23.0151852Z >>> assert x.dim() == 2 2025-03-17T18:45:23.0152435Z >>> assert weight.dim() == 2 2025-03-17T18:45:23.0153040Z >>> assert bias.dim() == 1 2025-03-17T18:45:23.0153666Z >>> assert x.shape[1] == weight.shape[1] 2025-03-17T18:45:23.0154344Z >>> assert weight.shape[0] == bias.shape[0] 2025-03-17T18:45:23.0155023Z >>> assert x.device == weight.device 2025-03-17T18:45:23.0155615Z >>> 2025-03-17T18:45:23.0156070Z >>> return (x @ weight.t()) + bias 2025-03-17T18:45:23.0156648Z >>> 2025-03-17T18:45:23.0157209Z >>> with torch._subclasses.fake_tensor.FakeTensorMode(): 2025-03-17T18:45:23.0157923Z >>> x = torch.randn(2, 3) 2025-03-17T18:45:23.0158513Z >>> w = torch.randn(3, 3) 2025-03-17T18:45:23.0159094Z >>> b = torch.randn(3) 2025-03-17T18:45:23.0159718Z >>> y = torch.ops.mylib.custom_linear(x, w, b) 2025-03-17T18:45:23.0160348Z >>> 2025-03-17T18:45:23.0160781Z >>> assert y.shape == (2, 3) 2025-03-17T18:45:23.0161327Z >>> 2025-03-17T18:45:23.0161888Z >>> # Example 2: an operator with data-dependent output shape 2025-03-17T18:45:23.0162836Z >>> @torch.library.custom_op("mylib::custom_nonzero", mutates_args=()) 2025-03-17T18:45:23.0163703Z >>> def custom_nonzero(x: Tensor) -> Tensor: 2025-03-17T18:45:23.0164358Z >>> x_np = x.numpy(force=True) 2025-03-17T18:45:23.0165017Z >>> res = np.stack(np.nonzero(x_np), axis=1) 2025-03-17T18:45:23.0165812Z >>> return torch.tensor(res, device=x.device) 2025-03-17T18:45:23.0166436Z >>> 2025-03-17T18:45:23.0166997Z >>> @torch.library.register_fake("mylib::custom_nonzero") 2025-03-17T18:45:23.0167702Z >>> def _(x): 2025-03-17T18:45:23.0168270Z >>> # Number of nonzero-elements is data-dependent. 2025-03-17T18:45:23.0169040Z >>> # Since we cannot peek at the data in an fake impl, 2025-03-17T18:45:23.0169823Z >>> # we use the ctx object to construct a new symint that 2025-03-17T18:45:23.0170573Z >>> # represents the data-dependent size. 
2025-03-17T18:45:23.0171239Z >>> ctx = torch.library.get_ctx() 2025-03-17T18:45:23.0171858Z >>> nnz = ctx.new_dynamic_size() 2025-03-17T18:45:23.0172476Z >>> shape = [nnz, x.dim()] 2025-03-17T18:45:23.0173140Z >>> result = x.new_empty(shape, dtype=torch.int64) 2025-03-17T18:45:23.0173797Z >>> return result 2025-03-17T18:45:23.0174294Z >>> 2025-03-17T18:45:23.0174865Z >>> from torch.fx.experimental.proxy_tensor import make_fx 2025-03-17T18:45:23.0175567Z >>> 2025-03-17T18:45:23.0176013Z >>> x = torch.tensor([0, 1, 2, 3, 4, 0]) 2025-03-17T18:45:23.0176881Z >>> trace = make_fx(torch.ops.mylib.custom_nonzero, tracing_mode="symbolic")(x) 2025-03-17T18:45:23.0177765Z >>> trace.print_readable() 2025-03-17T18:45:23.0178310Z >>> 2025-03-17T18:45:23.0178959Z >>> assert torch.allclose(trace(x), torch.ops.mylib.custom_nonzero(x)) 2025-03-17T18:45:23.0179750Z 2025-03-17T18:45:23.0180144Z 2025-03-17T18:45:23.0181366Z Original Error: IndentationError('expected an indented block after function definition on line 37', ('', 38, 1, '_._ = None\n', 38, 2)) 2025-03-17T18:45:23.0182740Z 2025-03-17T18:45:23.0183100Z _._ = None 2025-03-17T18:45:23.0183488Z ^ 2025-03-17T18:45:23.0183865Z warnings.warn(msg) 2025-03-17T18:45:23.0184330Z 2025-03-17T18:45:23.0184908Z --- Parse Warning: 10 / 116 --- 2025-03-17T18:45:23.0187077Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=register_autograd in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=1041. 2025-03-17T18:45:23.0189480Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0190463Z Register a backward formula for this custom op. 2025-03-17T18:45:23.0191098Z 2025-03-17T18:45:23.0191685Z In order for an operator to work with autograd, you need to register 2025-03-17T18:45:23.0192484Z a backward formula: 2025-03-17T18:45:23.0193188Z 1. You must tell us how to compute gradients during the backward pass 2025-03-17T18:45:23.0194017Z by providing us a "backward" function. 2025-03-17T18:45:23.0194838Z 2. If you need any values from the forward to compute gradients, you can 2025-03-17T18:45:23.0195728Z use `setup_context` to save values for backward. 2025-03-17T18:45:23.0196358Z 2025-03-17T18:45:23.0196981Z ``backward`` runs during the backward pass. It accepts ``(ctx, *grads)``: 2025-03-17T18:45:23.0197981Z - ``grads`` is one or more gradients. The number of gradients matches 2025-03-17T18:45:23.0198790Z the number of outputs of the operator. 2025-03-17T18:45:23.0199623Z The ``ctx`` object is `the same ctx object `_ used by 2025-03-17T18:45:23.0200705Z :class:`torch.autograd.Function`. The semantics of ``backward_fn`` are the 2025-03-17T18:45:23.0201670Z same as :meth:`torch.autograd.Function.backward`. 2025-03-17T18:45:23.0202324Z 2025-03-17T18:45:23.0202923Z ``setup_context(ctx, inputs, output)`` runs during the forward pass. 2025-03-17T18:45:23.0203968Z Please save quantities needed for backward onto the ``ctx`` object via 2025-03-17T18:45:23.0205113Z either :meth:`torch.autograd.function.FunctionCtx.save_for_backward` 2025-03-17T18:45:23.0206140Z or assigning them as attributes of ``ctx``. If your custom op has 2025-03-17T18:45:23.0207124Z kwarg-only arguments, we expect the signature of ``setup_context`` 2025-03-17T18:45:23.0208103Z to be ``setup_context(ctx, inputs, keyword_only_inputs, output)``. 2025-03-17T18:45:23.0208840Z 2025-03-17T18:45:23.0209440Z Both ``setup_context_fn`` and ``backward_fn`` must be traceable. 
That is, 2025-03-17T18:45:23.0210481Z they may not directly access :meth:`torch.Tensor.data_ptr` and they must 2025-03-17T18:45:23.0211558Z not depend on or mutate global state. If you need a non-traceable backward, 2025-03-17T18:45:23.0212634Z you can make it a separate custom_op that you call inside ``backward_fn``. 2025-03-17T18:45:23.0213429Z 2025-03-17T18:45:23.0214038Z If you need different autograd behavior on different devices, then we 2025-03-17T18:45:23.0215103Z recommend creating two different custom operators, one for each device 2025-03-17T18:45:23.0216184Z that needs different behavior, and switching between them at runtime. 2025-03-17T18:45:23.0216989Z 2025-03-17T18:45:23.0217356Z Examples: 2025-03-17T18:45:23.0217789Z >>> import torch 2025-03-17T18:45:23.0218300Z >>> import numpy as np 2025-03-17T18:45:23.0218855Z >>> from torch import Tensor 2025-03-17T18:45:23.0219402Z >>> 2025-03-17T18:45:23.0220021Z >>> @torch.library.custom_op("mylib::numpy_sin", mutates_args=()) 2025-03-17T18:45:23.0220868Z >>> def numpy_sin(x: Tensor) -> Tensor: 2025-03-17T18:45:23.0221475Z >>> x_np = x.cpu().numpy() 2025-03-17T18:45:23.0222046Z >>> y_np = np.sin(x_np) 2025-03-17T18:45:23.0222712Z >>> return torch.from_numpy(y_np).to(device=x.device) 2025-03-17T18:45:23.0223386Z >>> 2025-03-17T18:45:23.0223910Z >>> def setup_context(ctx, inputs, output) -> Tensor: 2025-03-17T18:45:23.0224580Z >>> x, = inputs 2025-03-17T18:45:23.0225102Z >>> ctx.save_for_backward(x) 2025-03-17T18:45:23.0225664Z >>> 2025-03-17T18:45:23.0226086Z >>> def backward(ctx, grad): 2025-03-17T18:45:23.0226794Z >>> x, = ctx.saved_tensors 2025-03-17T18:45:23.0227376Z >>> return grad * x.cos() 2025-03-17T18:45:23.0227924Z >>> 2025-03-17T18:45:23.0228376Z >>> torch.library.register_autograd( 2025-03-17T18:45:23.0229131Z ... "mylib::numpy_sin", backward, setup_context=setup_context 2025-03-17T18:45:23.0229840Z ... ) 2025-03-17T18:45:23.0230238Z >>> 2025-03-17T18:45:23.0230699Z >>> x = torch.randn(3, requires_grad=True) 2025-03-17T18:45:23.0231319Z >>> y = numpy_sin(x) 2025-03-17T18:45:23.0231967Z >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y)) 2025-03-17T18:45:23.0232721Z >>> assert torch.allclose(grad_x, x.cos()) 2025-03-17T18:45:23.0233320Z >>> 2025-03-17T18:45:23.0233764Z >>> # Example with a keyword-only arg 2025-03-17T18:45:23.0234568Z >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=()) 2025-03-17T18:45:23.0235439Z >>> def numpy_mul(x: Tensor, *, val: float) -> Tensor: 2025-03-17T18:45:23.0236119Z >>> x_np = x.cpu().numpy() 2025-03-17T18:45:23.0236691Z >>> y_np = x_np * val 2025-03-17T18:45:23.0237482Z >>> return torch.from_numpy(y_np).to(device=x.device) 2025-03-17T18:45:23.0238154Z >>> 2025-03-17T18:45:23.0238805Z >>> def setup_context(ctx, inputs, keyword_only_inputs, output) -> Tensor: 2025-03-17T18:45:23.0239674Z >>> ctx.val = keyword_only_inputs["val"] 2025-03-17T18:45:23.0240262Z >>> 2025-03-17T18:45:23.0240693Z >>> def backward(ctx, grad): 2025-03-17T18:45:23.0241387Z >>> return grad * ctx.val 2025-03-17T18:45:23.0241934Z >>> 2025-03-17T18:45:23.0242392Z >>> torch.library.register_autograd( 2025-03-17T18:45:23.0243149Z ... "mylib::numpy_mul", backward, setup_context=setup_context 2025-03-17T18:45:23.0243856Z ... 
) 2025-03-17T18:45:23.0244251Z >>> 2025-03-17T18:45:23.0244696Z >>> x = torch.randn(3, requires_grad=True) 2025-03-17T18:45:23.0245321Z >>> y = numpy_mul(x, val=3.14) 2025-03-17T18:45:23.0246016Z >>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y)) 2025-03-17T18:45:23.0246877Z >>> assert torch.allclose(grad_x, torch.full_like(x, 3.14)) 2025-03-17T18:45:23.0247572Z 2025-03-17T18:45:23.0247930Z 2025-03-17T18:45:23.0248606Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0249448Z 2025-03-17T18:45:23.0249823Z warnings.warn(msg) 2025-03-17T18:45:23.0250269Z 2025-03-17T18:45:23.0250845Z --- Parse Warning: 11 / 116 --- 2025-03-17T18:45:23.0252849Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=opcheck in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py line=1455. 2025-03-17T18:45:23.0255121Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0256237Z Given an operator and some sample arguments, tests if the operator is 2025-03-17T18:45:23.0257044Z registered correctly. 2025-03-17T18:45:23.0257585Z 2025-03-17T18:45:23.0258173Z That is, when you use the torch.library/TORCH_LIBRARY APIs to create a 2025-03-17T18:45:23.0259236Z custom op, you specified metadata (e.g. mutability info) about the custom op 2025-03-17T18:45:23.0260337Z and these APIs require that the functions you pass them satisfy certain 2025-03-17T18:45:23.0261414Z properties (e.g. no data pointer access in the fake/meta/abstract kernel) 2025-03-17T18:45:23.0262328Z ``opcheck`` tests these metadata and properties. 2025-03-17T18:45:23.0262942Z 2025-03-17T18:45:23.0263344Z Concretely, we test the following: 2025-03-17T18:45:23.0263904Z 2025-03-17T18:45:23.0264467Z - test_schema: If the schema matches the implementation of 2025-03-17T18:45:23.0265420Z the operator. For example: if the schema specifies a Tensor is mutated, 2025-03-17T18:45:23.0266498Z then we check the implementation mutates the Tensor. If the schema 2025-03-17T18:45:23.0267463Z specifies that we return a new Tensor, then we check that the 2025-03-17T18:45:23.0268453Z implementation returns a new Tensor (instead of an existing one or 2025-03-17T18:45:23.0269278Z a view of an existing one). 2025-03-17T18:45:23.0270032Z - test_autograd_registration: If the operator supports training 2025-03-17T18:45:23.0270985Z (autograd): we check that its autograd formula is registered via 2025-03-17T18:45:23.0271956Z torch.library.register_autograd or a manual registration to one 2025-03-17T18:45:23.0272940Z or more DispatchKey::Autograd keys. Any other DispatchKey-based 2025-03-17T18:45:23.0273785Z registrations may lead to undefined behavior. 2025-03-17T18:45:23.0274585Z - test_faketensor: If the operator has a FakeTensor kernel 2025-03-17T18:45:23.0275448Z (and if it is correct). The FakeTensor kernel is necessary ( 2025-03-17T18:45:23.0276385Z but not sufficient) for the operator to work with PyTorch compilation 2025-03-17T18:45:23.0277405Z APIs (torch.compile/export/FX). We check that a FakeTensor kernel 2025-03-17T18:45:23.0278372Z (also sometimes known as a meta kernel) was registered for the 2025-03-17T18:45:23.0279306Z operator and that it is correct. 
This test takes the result of 2025-03-17T18:45:23.0280336Z running the operator on real tensors and the result of running 2025-03-17T18:45:23.0281282Z the operator on FakeTensors and checks that they have the same 2025-03-17T18:45:23.0282143Z Tensor metadata (sizes/strides/dtype/device/etc). 2025-03-17T18:45:23.0283005Z - test_aot_dispatch_dynamic: If the operator has correct behavior 2025-03-17T18:45:23.0283927Z with PyTorch compilation APIs (torch.compile/export/FX). 2025-03-17T18:45:23.0284864Z This checks that the outputs (and gradients, if applicable) are the 2025-03-17T18:45:23.0285743Z same under eager-mode PyTorch and torch.compile. 2025-03-17T18:45:23.0286588Z This test is a superset of ``test_faketensor`` and is an e2e test; 2025-03-17T18:45:23.0287466Z other things it tests are that the operator supports 2025-03-17T18:45:23.0288381Z functionalization and that the backward pass (if it exists) also 2025-03-17T18:45:23.0289254Z supports FakeTensor and functionalization. 2025-03-17T18:45:23.0289867Z 2025-03-17T18:45:23.0290436Z For best results, please call ``opcheck`` multiple times with a 2025-03-17T18:45:23.0291348Z representative set of inputs. If your operator supports 2025-03-17T18:45:23.0292331Z autograd, please use ``opcheck`` with inputs with ``requires_grad = True``; 2025-03-17T18:45:23.0293409Z if your operator supports multiple devices (e.g. CPU and CUDA), please 2025-03-17T18:45:23.0294336Z use ``opcheck`` with inputs on all supported devices. 2025-03-17T18:45:23.0294989Z 2025-03-17T18:45:23.0295348Z Args: 2025-03-17T18:45:23.0295939Z op: The operator. Must either be a function decorated with 2025-03-17T18:45:23.0296874Z :func:`torch.library.custom_op` or an OpOverload/OpOverloadPacket 2025-03-17T18:45:23.0297877Z found in torch.ops.* (e.g. torch.ops.aten.sin, torch.ops.mylib.foo) 2025-03-17T18:45:23.0298695Z args: The args to the operator 2025-03-17T18:45:23.0299319Z kwargs: The kwargs to the operator 2025-03-17T18:45:23.0300069Z test_utils: Tests that we should run. Default: all of them. 2025-03-17T18:45:23.0300873Z Example: ("test_schema", "test_faketensor") 2025-03-17T18:45:23.0301691Z raise_exception: If we should raise an exception on the first 2025-03-17T18:45:23.0302609Z error. If False, we will return a dict with information 2025-03-17T18:45:23.0303345Z on if each test passed or not. 2025-03-17T18:45:23.0304192Z rtol (Optional[float]): Relative tolerance for floating point comparisons. 2025-03-17T18:45:23.0305121Z If specified ``atol`` must also be specified. 2025-03-17T18:45:23.0305942Z If omitted, default values based on the ``dtype`` are selected 2025-03-17T18:45:23.0306905Z (see the table in :func:`torch.testing.assert_close`). 2025-03-17T18:45:23.0307875Z atol (Optional[float]): Absolute tolerance for floating point comparisons. 2025-03-17T18:45:23.0308796Z If specified ``rtol`` must also be specified. 2025-03-17T18:45:23.0309610Z If omitted, default values based on the ``dtype`` are selected 2025-03-17T18:45:23.0310481Z (see the table in :func:`torch.testing.assert_close`). 2025-03-17T18:45:23.0311150Z 2025-03-17T18:45:23.0311513Z .. warning:: 2025-03-17T18:45:23.0311929Z 2025-03-17T18:45:23.0312533Z opcheck and :func:`torch.autograd.gradcheck` test different things; 2025-03-17T18:45:23.0313542Z opcheck tests if your usage of torch.library APIs is correct while 2025-03-17T18:45:23.0314534Z :func:`torch.autograd.gradcheck` tests if your autograd formula is 2025-03-17T18:45:23.0314937Z mathematically correct. 
Use both to test custom ops that support 2025-03-17T18:45:23.0315132Z gradient computation. 2025-03-17T18:45:23.0315279Z 2025-03-17T18:45:23.0315437Z Example: 2025-03-17T18:45:23.0315659Z 2025-03-17T18:45:23.0315916Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:23.0316283Z >>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=()) 2025-03-17T18:45:23.0316544Z >>> def numpy_mul(x: Tensor, y: float) -> Tensor: 2025-03-17T18:45:23.0316746Z >>> x_np = x.numpy(force=True) 2025-03-17T18:45:23.0316931Z >>> z_np = x_np * y 2025-03-17T18:45:23.0317174Z >>> return torch.from_numpy(z_np).to(x.device) 2025-03-17T18:45:23.0317338Z >>> 2025-03-17T18:45:23.0317532Z >>> @numpy_mul.register_fake 2025-03-17T18:45:23.0317713Z >>> def _(x, y): 2025-03-17T18:45:23.0317912Z >>> return torch.empty_like(x) 2025-03-17T18:45:23.0318076Z >>> 2025-03-17T18:45:23.0318302Z >>> def setup_context(ctx, inputs, output): 2025-03-17T18:45:23.0318476Z >>> y, = inputs 2025-03-17T18:45:23.0318640Z >>> ctx.y = y 2025-03-17T18:45:23.0318799Z >>> 2025-03-17T18:45:23.0318993Z >>> def backward(ctx, grad): 2025-03-17T18:45:23.0319196Z >>> return grad * ctx.y, None 2025-03-17T18:45:23.0319351Z >>> 2025-03-17T18:45:23.0319765Z >>> numpy_mul.register_autograd(backward, setup_context=setup_context) 2025-03-17T18:45:23.0319918Z >>> 2025-03-17T18:45:23.0320111Z >>> sample_inputs = [ 2025-03-17T18:45:23.0320298Z >>> (torch.randn(3), 3.14), 2025-03-17T18:45:23.0320535Z >>> (torch.randn(2, 3, device='cuda'), 2.718), 2025-03-17T18:45:23.0320829Z >>> (torch.randn(1, 10, requires_grad=True), 1.234), 2025-03-17T18:45:23.0321175Z >>> (torch.randn(64, 64, device='cuda', requires_grad=True), 90.18), 2025-03-17T18:45:23.0321325Z >>> ] 2025-03-17T18:45:23.0321487Z >>> 2025-03-17T18:45:23.0321682Z >>> for args in sample_inputs: 2025-03-17T18:45:23.0321927Z >>> torch.library.opcheck(numpy_mul, args) 2025-03-17T18:45:23.0322082Z 2025-03-17T18:45:23.0322229Z 2025-03-17T18:45:23.0322707Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0322858Z 2025-03-17T18:45:23.0323076Z warnings.warn(msg) 2025-03-17T18:45:23.0323225Z 2025-03-17T18:45:23.0323590Z --- Parse Warning: 12 / 116 --- 2025-03-17T18:45:23.0325179Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/serialization.py line=1283. 2025-03-17T18:45:23.0325680Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0326289Z load(f, map_location=None, pickle_module=pickle, *, weights_only=True, mmap=None, **pickle_load_args) 2025-03-17T18:45:23.0326442Z 2025-03-17T18:45:23.0326768Z Loads an object saved with :func:`torch.save` from a file. 2025-03-17T18:45:23.0326931Z 2025-03-17T18:45:23.0327354Z :func:`torch.load` uses Python's unpickling facilities but treats storages, 2025-03-17T18:45:23.0327790Z which underlie tensors, specially. They are first deserialized on the 2025-03-17T18:45:23.0328184Z CPU and are then moved to the device they were saved from. If this fails 2025-03-17T18:45:23.0328625Z (e.g. because the run time system doesn't have certain devices), an exception 2025-03-17T18:45:23.0329058Z is raised. However, storages can be dynamically remapped to an alternative 2025-03-17T18:45:23.0329369Z set of devices using the :attr:`map_location` argument. 
2025-03-17T18:45:23.0329517Z 2025-03-17T18:45:23.0329975Z If :attr:`map_location` is a callable, it will be called once for each serialized 2025-03-17T18:45:23.0330384Z storage with two arguments: storage and location. The storage argument 2025-03-17T18:45:23.0330893Z will be the initial deserialization of the storage, residing on the CPU. 2025-03-17T18:45:23.0331288Z Each serialized storage has a location tag associated with it which 2025-03-17T18:45:23.0331687Z identifies the device it was saved from, and this tag is the second 2025-03-17T18:45:23.0332157Z argument passed to :attr:`map_location`. The builtin location tags are ``'cpu'`` 2025-03-17T18:45:23.0332575Z for CPU tensors and ``'cuda:device_id'`` (e.g. ``'cuda:2'``) for CUDA tensors. 2025-03-17T18:45:23.0332947Z :attr:`map_location` should return either ``None`` or a storage. If 2025-03-17T18:45:23.0333428Z :attr:`map_location` returns a storage, it will be used as the final deserialized 2025-03-17T18:45:23.0333881Z object, already moved to the right device. Otherwise, :func:`torch.load` will 2025-03-17T18:45:23.0334338Z fall back to the default behavior, as if :attr:`map_location` wasn't specified. 2025-03-17T18:45:23.0334483Z 2025-03-17T18:45:23.0334924Z If :attr:`map_location` is a :class:`torch.device` object or a string containing 2025-03-17T18:45:23.0335349Z a device tag, it indicates the location where all tensors should be loaded. 2025-03-17T18:45:23.0335506Z 2025-03-17T18:45:23.0335977Z Otherwise, if :attr:`map_location` is a dict, it will be used to remap location tags 2025-03-17T18:45:23.0336365Z appearing in the file (keys), to ones that specify where to put the 2025-03-17T18:45:23.0336542Z storages (values). 2025-03-17T18:45:23.0336698Z 2025-03-17T18:45:23.0337243Z User extensions can register their own location tags and tagging and 2025-03-17T18:45:23.0337810Z deserialization methods using :func:`torch.serialization.register_package`. 2025-03-17T18:45:23.0337955Z 2025-03-17T18:45:23.0338115Z Args: 2025-03-17T18:45:23.0338704Z f: a file-like object (has to implement :meth:`read`, :meth:`readline`, :meth:`tell`, and :meth:`seek`), 2025-03-17T18:45:23.0339021Z or a string or os.PathLike object containing a file name 2025-03-17T18:45:23.0339616Z map_location: a function, :class:`torch.device`, string or a dict specifying how to remap storage 2025-03-17T18:45:23.0339785Z locations 2025-03-17T18:45:23.0340248Z pickle_module: module used for unpickling metadata and objects (has to 2025-03-17T18:45:23.0340569Z match the :attr:`pickle_module` used to serialize file) 2025-03-17T18:45:23.0340956Z weights_only: Indicates whether unpickler should be restricted to 2025-03-17T18:45:23.0341257Z loading only tensors, primitive types, dictionaries 2025-03-17T18:45:23.0341650Z and any types added via :func:`torch.serialization.add_safe_globals`. 2025-03-17T18:45:23.0341894Z See :ref:`weights-only` for more details. 2025-03-17T18:45:23.0342525Z mmap: Indicates whether the file should be mmaped rather than loading all the storages into memory. 2025-03-17T18:45:23.0343164Z Typically, tensor storages in the file will first be moved from disk to CPU memory, after which they 2025-03-17T18:45:23.0343803Z are moved to the location that they were tagged with when saving, or specified by ``map_location``. This 2025-03-17T18:45:23.0344421Z second step is a no-op if the final location is CPU. 
When the ``mmap`` flag is set, instead of copying the 2025-03-17T18:45:23.0344864Z tensor storages from disk to CPU memory in the first step, ``f`` is mmaped. 2025-03-17T18:45:23.0345320Z pickle_load_args: (Python 3 only) optional keyword arguments passed over to 2025-03-17T18:45:23.0345729Z :func:`pickle_module.load` and :func:`pickle_module.Unpickler`, e.g., 2025-03-17T18:45:23.0345919Z :attr:`errors=...`. 2025-03-17T18:45:23.0346065Z 2025-03-17T18:45:23.0346244Z .. warning:: 2025-03-17T18:45:23.0346794Z :func:`torch.load()` unless `weights_only` parameter is set to `True`, 2025-03-17T18:45:23.0347167Z uses ``pickle`` module implicitly, which is known to be insecure. 2025-03-17T18:45:23.0347675Z It is possible to construct malicious pickle data which will execute arbitrary code 2025-03-17T18:45:23.0348127Z during unpickling. Never load data that could have come from an untrusted 2025-03-17T18:45:23.0348671Z source in an unsafe mode, or that could have been tampered with. **Only load data you trust**. 2025-03-17T18:45:23.0348827Z 2025-03-17T18:45:23.0348991Z .. note:: 2025-03-17T18:45:23.0349478Z When you call :func:`torch.load()` on a file which contains GPU tensors, those tensors 2025-03-17T18:45:23.0349949Z will be loaded to GPU by default. You can call ``torch.load(.., map_location='cpu')`` 2025-03-17T18:45:23.0350457Z and then :meth:`load_state_dict` to avoid GPU RAM surge when loading a model checkpoint. 2025-03-17T18:45:23.0350609Z 2025-03-17T18:45:23.0350773Z .. note:: 2025-03-17T18:45:23.0351230Z By default, we decode byte strings as ``utf-8``. This is to avoid a common error 2025-03-17T18:45:23.0351637Z case ``UnicodeDecodeError: 'ascii' codec can't decode byte 0x...`` 2025-03-17T18:45:23.0352021Z when loading files saved by Python 2 in Python 3. If this default 2025-03-17T18:45:23.0352502Z is incorrect, you may use an extra :attr:`encoding` keyword argument to specify how 2025-03-17T18:45:23.0352935Z these objects should be loaded, e.g., :attr:`encoding='latin1'` decodes them 2025-03-17T18:45:23.0353422Z to strings using ``latin1`` encoding, and :attr:`encoding='bytes'` keeps them 2025-03-17T18:45:23.0353854Z as byte arrays which can be decoded later with ``byte_array.decode(...)``. 2025-03-17T18:45:23.0354020Z 2025-03-17T18:45:23.0354174Z Example: 2025-03-17T18:45:23.0354421Z >>> # xdoctest: +SKIP("undefined filepaths") 2025-03-17T18:45:23.0354673Z >>> torch.load("tensors.pt", weights_only=True) 2025-03-17T18:45:23.0354894Z # Load all tensors onto the CPU 2025-03-17T18:45:23.0355064Z >>> torch.load( 2025-03-17T18:45:23.0355251Z ... "tensors.pt", 2025-03-17T18:45:23.0355510Z ... map_location=torch.device("cpu"), 2025-03-17T18:45:23.0355710Z ... weights_only=True, 2025-03-17T18:45:23.0355863Z ... ) 2025-03-17T18:45:23.0356129Z # Load all tensors onto the CPU, using a function 2025-03-17T18:45:23.0356296Z >>> torch.load( 2025-03-17T18:45:23.0356478Z ... "tensors.pt", 2025-03-17T18:45:23.0356725Z ... map_location=lambda storage, loc: storage, 2025-03-17T18:45:23.0356921Z ... weights_only=True, 2025-03-17T18:45:23.0357075Z ... ) 2025-03-17T18:45:23.0357291Z # Load all tensors onto GPU 1 2025-03-17T18:45:23.0357462Z >>> torch.load( 2025-03-17T18:45:23.0357648Z ... "tensors.pt", 2025-03-17T18:45:23.0357936Z ... map_location=lambda storage, loc: storage.cuda(1), 2025-03-17T18:45:23.0358138Z ... weights_only=True, 2025-03-17T18:45:23.0358345Z ... 
) # type: ignore[attr-defined] 2025-03-17T18:45:23.0358562Z # Map tensors from GPU 1 to GPU 0 2025-03-17T18:45:23.0358730Z >>> torch.load( 2025-03-17T18:45:23.0358916Z ... "tensors.pt", 2025-03-17T18:45:23.0359133Z ... map_location={"cuda:1": "cuda:0"}, 2025-03-17T18:45:23.0359337Z ... weights_only=True, 2025-03-17T18:45:23.0359489Z ... ) 2025-03-17T18:45:23.0359714Z # Load tensor from io.BytesIO object 2025-03-17T18:45:23.0360176Z # Loading from a buffer setting weights_only=False, warning this can be unsafe 2025-03-17T18:45:23.0360401Z >>> with open("tensor.pt", "rb") as f: 2025-03-17T18:45:23.0360687Z ... buffer = io.BytesIO(f.read()) 2025-03-17T18:45:23.0360933Z >>> torch.load(buffer, weights_only=False) 2025-03-17T18:45:23.0361202Z # Load a module with 'ascii' encoding for unpickling 2025-03-17T18:45:23.0361678Z # Loading from a module setting weights_only=False, warning this can be unsafe 2025-03-17T18:45:23.0362038Z >>> torch.load("module.pt", encoding="ascii", weights_only=False) 2025-03-17T18:45:23.0362205Z 2025-03-17T18:45:23.0362674Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0362833Z 2025-03-17T18:45:23.0363008Z warnings.warn(msg) 2025-03-17T18:45:23.0363161Z 2025-03-17T18:45:23.0363531Z --- Parse Warning: 13 / 116 --- 2025-03-17T18:45:23.0365220Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=is_available in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py line=38. 2025-03-17T18:45:23.0365685Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0366169Z Check if the current accelerator is available at runtime: it was build, all the 2025-03-17T18:45:23.0366565Z required drivers are available and at least one device is visible. 2025-03-17T18:45:23.0366841Z See :ref:`accelerator` for details. 2025-03-17T18:45:23.0366989Z 2025-03-17T18:45:23.0367144Z Returns: 2025-03-17T18:45:23.0367660Z bool: A boolean indicating if there is an available :ref:`accelerator`. 2025-03-17T18:45:23.0367839Z 2025-03-17T18:45:23.0368011Z Example:: 2025-03-17T18:45:23.0368157Z 2025-03-17T18:45:23.0368658Z >>> assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:23.0368809Z 2025-03-17T18:45:23.0369870Z Original Error: SyntaxError('invalid syntax', ('', 1, 41, 'assert torch.accelerator.is_available() "No available accelerators detected."\n', 1, 78)) 2025-03-17T18:45:23.0370020Z 2025-03-17T18:45:23.0370517Z assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:23.0370724Z ^ 2025-03-17T18:45:23.0370916Z warnings.warn(msg) 2025-03-17T18:45:23.0371061Z 2025-03-17T18:45:23.0371397Z --- Parse Warning: 14 / 116 --- 2025-03-17T18:45:23.0373094Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=synchronize in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/accelerator/__init__.py line=153. 2025-03-17T18:45:23.0373565Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0373945Z Wait for all kernels in all streams on the given device to complete. 2025-03-17T18:45:23.0374106Z 2025-03-17T18:45:23.0374266Z Args: 2025-03-17T18:45:23.0374859Z device (:class:`torch.device`, str, int, optional): device for which to synchronize. It must match 2025-03-17T18:45:23.0375281Z the current :ref:`accelerator` device type. 
If not given, 2025-03-17T18:45:23.0375647Z use :func:`torch.accelerator.current_device_index` by default. 2025-03-17T18:45:23.0375794Z 2025-03-17T18:45:23.0376380Z .. note:: This function is a no-op if the current :ref:`accelerator` is not initialized. 2025-03-17T18:45:23.0376528Z 2025-03-17T18:45:23.0376699Z Example:: 2025-03-17T18:45:23.0376848Z 2025-03-17T18:45:23.0377110Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:23.0377603Z >>> assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:23.0377872Z >>> start_event = torch.Event(enable_timing=True) 2025-03-17T18:45:23.0378175Z >>> end_event = torch.Event(enable_timing=True) 2025-03-17T18:45:23.0378376Z >>> start_event.record() 2025-03-17T18:45:23.0378818Z >>> tensor = torch.randn(100, device=torch.accelerator.current_accelerator()) 2025-03-17T18:45:23.0379014Z >>> sum = torch.sum(tensor) 2025-03-17T18:45:23.0379198Z >>> end_event.record() 2025-03-17T18:45:23.0379438Z >>> torch.accelerator.synchronize() 2025-03-17T18:45:23.0379741Z >>> elapsed_time_ms = start_event.elapsed_time(end_event) 2025-03-17T18:45:23.0379904Z 2025-03-17T18:45:23.0380954Z Original Error: SyntaxError('invalid syntax', ('', 2, 41, 'assert torch.accelerator.is_available() "No available accelerators detected."\n', 2, 78)) 2025-03-17T18:45:23.0381126Z 2025-03-17T18:45:23.0381605Z assert torch.accelerator.is_available() "No available accelerators detected." 2025-03-17T18:45:23.0381795Z ^ 2025-03-17T18:45:23.0381979Z warnings.warn(msg) 2025-03-17T18:45:23.0382144Z 2025-03-17T18:45:23.0382469Z --- Parse Warning: 15 / 116 --- 2025-03-17T18:45:23.0384058Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=cudart in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/cuda/__init__.py line=396. 2025-03-17T18:45:23.0384512Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0384742Z Retrieves the CUDA runtime API module. 2025-03-17T18:45:23.0384894Z 2025-03-17T18:45:23.0385095Z 2025-03-17T18:45:23.0385565Z This function initializes the CUDA runtime environment if it is not already 2025-03-17T18:45:23.0386004Z initialized and returns the CUDA runtime API module (_cudart). The CUDA 2025-03-17T18:45:23.0386416Z runtime API module provides access to various CUDA runtime functions. 2025-03-17T18:45:23.0386658Z 2025-03-17T18:45:23.0386818Z Args: 2025-03-17T18:45:23.0386993Z ``None`` 2025-03-17T18:45:23.0387145Z 2025-03-17T18:45:23.0387320Z Returns: 2025-03-17T18:45:23.0387580Z module: The CUDA runtime API module (_cudart). 2025-03-17T18:45:23.0387750Z 2025-03-17T18:45:23.0387940Z Raises: 2025-03-17T18:45:23.0388380Z RuntimeError: If CUDA cannot be re-initialized in a forked subprocess. 2025-03-17T18:45:23.0389056Z AssertionError: If PyTorch is not compiled with CUDA support or if libcudart functions are unavailable. 
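On the two SyntaxErrors reported above for torch.accelerator.is_available and torch.accelerator.synchronize: the failing doctest line is an assert with no comma between the condition and its message, so Python sees two adjacent expressions and rejects the string literal. A corrected sketch of that line, assuming the string was intended as the assert message:

    >>> import torch
    >>> assert torch.accelerator.is_available(), "No available accelerators detected."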
2025-03-17T18:45:23.0389215Z 2025-03-17T18:45:23.0389454Z Example of CUDA operations with profiling: 2025-03-17T18:45:23.0389639Z >>> import torch 2025-03-17T18:45:23.0389882Z >>> from torch.cuda import cudart, check_error 2025-03-17T18:45:23.0390062Z >>> import os 2025-03-17T18:45:23.0390209Z >>> 2025-03-17T18:45:23.0390422Z >>> os.environ['CUDA_PROFILE'] = '1' 2025-03-17T18:45:23.0390592Z >>> 2025-03-17T18:45:23.0390837Z >>> def perform_cuda_operations_with_streams(): 2025-03-17T18:45:23.0391052Z >>> stream = torch.cuda.Stream() 2025-03-17T18:45:23.0391269Z >>> with torch.cuda.stream(stream): 2025-03-17T18:45:23.0391511Z >>> x = torch.randn(100, 100, device='cuda') 2025-03-17T18:45:23.0391729Z >>> y = torch.randn(100, 100, device='cuda') 2025-03-17T18:45:23.0391930Z >>> z = torch.mul(x, y) 2025-03-17T18:45:23.0392098Z >>> return z 2025-03-17T18:45:23.0392260Z >>> 2025-03-17T18:45:23.0392461Z >>> torch.cuda.synchronize() 2025-03-17T18:45:23.0392702Z >>> print("====== Start nsys profiling ======") 2025-03-17T18:45:23.0392944Z >>> check_error(cudart().cudaProfilerStart()) 2025-03-17T18:45:23.0393203Z >>> with torch.autograd.profiler.emit_nvtx(): 2025-03-17T18:45:23.0393535Z >>> result = perform_cuda_operations_with_streams() 2025-03-17T18:45:23.0393777Z >>> print("CUDA operations completed.") 2025-03-17T18:45:23.0394066Z >>> check_error(torch.cuda.cudart().cudaProfilerStop()) 2025-03-17T18:45:23.0394300Z >>> print("====== End nsys profiling ======") 2025-03-17T18:45:23.0394450Z 2025-03-17T18:45:23.0394821Z To run this example and save the profiling information, execute: 2025-03-17T18:45:23.0395477Z >>> $ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py 2025-03-17T18:45:23.0395644Z 2025-03-17T18:45:23.0396096Z This command profiles the CUDA operations in the provided script and saves 2025-03-17T18:45:23.0396455Z the profiling information to a file named `trace_name.prof`. 2025-03-17T18:45:23.0396880Z The `--profile-from-start off` option ensures that profiling starts only 2025-03-17T18:45:23.0397166Z after the `cudaProfilerStart` call in the script. 2025-03-17T18:45:23.0397568Z The `--csv` and `--print-summary` options format the profiling output as a 2025-03-17T18:45:23.0397813Z CSV file and print a summary, respectively. 2025-03-17T18:45:23.0398260Z The `-o` option specifies the output file name, and the `-f` option forces the 2025-03-17T18:45:23.0398549Z overwrite of the output file if it already exists. 2025-03-17T18:45:23.0398702Z 2025-03-17T18:45:23.0399921Z Original Error: SyntaxError('invalid syntax', ('', 1, 1, '$ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py\n', 1, 2)) 2025-03-17T18:45:23.0400103Z 2025-03-17T18:45:23.0400760Z $ nvprof --profile-from-start off --csv --print-summary -o trace_name.prof -f -- python cudart_test.py 2025-03-17T18:45:23.0400911Z ^ 2025-03-17T18:45:23.0401095Z warnings.warn(msg) 2025-03-17T18:45:23.0401240Z 2025-03-17T18:45:23.0401595Z --- Parse Warning: 16 / 116 --- 2025-03-17T18:45:23.0403241Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Future.then in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py line=105. 
2025-03-17T18:45:23.0403780Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0403925Z 2025-03-17T18:45:23.0404362Z Append the given callback function to this ``Future``, which will be run 2025-03-17T18:45:23.0404738Z when the ``Future`` is completed. Multiple callbacks can be added to 2025-03-17T18:45:23.0405132Z the same ``Future``, but the order in which they will be executed cannot 2025-03-17T18:45:23.0405464Z be guaranteed (to enforce a certain order consider chaining: 2025-03-17T18:45:23.0405846Z ``fut.then(cb1).then(cb2)``). The callback must take one argument, which 2025-03-17T18:45:23.0406234Z is the reference to this ``Future``. The callback function can use the 2025-03-17T18:45:23.0406619Z :meth:`value` method to get the value. Note that if this ``Future`` is 2025-03-17T18:45:23.0407033Z already completed, the given callback will be run immediately inline. 2025-03-17T18:45:23.0407187Z 2025-03-17T18:45:23.0407537Z If the ``Future``'s value contains tensors that reside on GPUs, the 2025-03-17T18:45:23.0407961Z callback might be invoked while the async kernels that are populating 2025-03-17T18:45:23.0408380Z those tensors haven't yet finished executing on the device. However, the 2025-03-17T18:45:23.0408778Z callback will be invoked with some dedicated streams set as current 2025-03-17T18:45:23.0409154Z (fetched from a global pool) which will be synchronized with those 2025-03-17T18:45:23.0409584Z kernels. Hence any operation performed by the callback on these tensors 2025-03-17T18:45:23.0409971Z will be scheduled on the device after the kernels complete. In other 2025-03-17T18:45:23.0410410Z words, as long as the callback doesn't switch streams, it can safely 2025-03-17T18:45:23.0410831Z manipulate the result without any additional synchronization. This is 2025-03-17T18:45:23.0411129Z similar to the non-blocking behavior of :meth:`wait`. 2025-03-17T18:45:23.0411278Z 2025-03-17T18:45:23.0411687Z Similarly, if the callback returns a value that contains tensors that 2025-03-17T18:45:23.0412041Z reside on a GPU, it can do so even if the kernels that are producing 2025-03-17T18:45:23.0412458Z these tensors are still running on the device, as long as the callback 2025-03-17T18:45:23.0412830Z didn't change streams during its execution. If one wants to change 2025-03-17T18:45:23.0413236Z streams, one must be careful to re-synchronize them with the original 2025-03-17T18:45:23.0413634Z streams, that is, those that were current when the callback was invoked. 2025-03-17T18:45:23.0413800Z 2025-03-17T18:45:23.0413956Z Args: 2025-03-17T18:45:23.0414345Z callback(``Callable``): a ``Callable`` that takes this ``Future`` as 2025-03-17T18:45:23.0414536Z the only argument. 2025-03-17T18:45:23.0414698Z 2025-03-17T18:45:23.0414858Z Returns: 2025-03-17T18:45:23.0415168Z A new ``Future`` object that holds the return value of the 2025-03-17T18:45:23.0415495Z ``callback`` and will be marked as completed when the given 2025-03-17T18:45:23.0415692Z ``callback`` finishes. 2025-03-17T18:45:23.0415844Z 2025-03-17T18:45:23.0416164Z .. note:: Note that if the callback function throws, either 2025-03-17T18:45:23.0416587Z through the original future being completed with an exception and 2025-03-17T18:45:23.0416952Z calling ``fut.wait()``, or through other code in the callback, the 2025-03-17T18:45:23.0417329Z future returned by ``then`` will be marked appropriately with the 2025-03-17T18:45:23.0417704Z encountered error. 
However, if this callback later completes 2025-03-17T18:45:23.0418109Z additional futures, those futures are not marked as completed with 2025-03-17T18:45:23.0418496Z an error and the user is responsible for handling completion/waiting 2025-03-17T18:45:23.0418702Z on those futures independently. 2025-03-17T18:45:23.0418864Z 2025-03-17T18:45:23.0419082Z Example:: 2025-03-17T18:45:23.0419365Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_FUTURES) 2025-03-17T18:45:23.0419541Z >>> def callback(fut): 2025-03-17T18:45:23.0419802Z ... print(f"RPC return value is {fut.wait()}.") 2025-03-17T18:45:23.0420007Z >>> fut = torch.futures.Future() 2025-03-17T18:45:23.0420330Z >>> # The inserted callback will print the return value when 2025-03-17T18:45:23.0420550Z >>> # receiving the response from "worker1" 2025-03-17T18:45:23.0420745Z >>> cb_fut = fut.then(callback) 2025-03-17T18:45:23.0420947Z >>> chain_cb_fut = cb_fut.then( 2025-03-17T18:45:23.0421218Z ... lambda x : print(f"Chained cb done. {x.wait()}") 2025-03-17T18:45:23.0421382Z ... ) 2025-03-17T18:45:23.0421557Z >>> fut.set_result(5) 2025-03-17T18:45:23.0421755Z RPC return value is 5. 2025-03-17T18:45:23.0421929Z Chained cb done. None 2025-03-17T18:45:23.0422087Z 2025-03-17T18:45:23.0422554Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0422716Z 2025-03-17T18:45:23.0422889Z warnings.warn(msg) 2025-03-17T18:45:23.0423054Z 2025-03-17T18:45:23.0423382Z --- Parse Warning: 17 / 116 --- 2025-03-17T18:45:23.0425083Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Future.set_result in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/futures/__init__.py line=213. 2025-03-17T18:45:23.0425565Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0425722Z 2025-03-17T18:45:23.0426164Z Set the result for this ``Future``, which will mark this ``Future`` as 2025-03-17T18:45:23.0426668Z completed and trigger all attached callbacks. Note that a ``Future`` 2025-03-17T18:45:23.0426868Z cannot be marked completed twice. 2025-03-17T18:45:23.0427025Z 2025-03-17T18:45:23.0427424Z If the result contains tensors that reside on GPUs, this method can be 2025-03-17T18:45:23.0427809Z called even if the asynchronous kernels that are populating those 2025-03-17T18:45:23.0428216Z tensors haven't yet completed running on the device, provided that the 2025-03-17T18:45:23.0428647Z streams on which those kernels were enqueued are set as the current ones 2025-03-17T18:45:23.0429036Z when this method is called. Put simply, it's safe to call this method 2025-03-17T18:45:23.0429442Z immediately after launching those kernels, without any additional 2025-03-17T18:45:23.0429866Z synchronization, as long as one doesn't change streams in between. This 2025-03-17T18:45:23.0430294Z method will record events on all the relevant current streams and will 2025-03-17T18:45:23.0430668Z use them to ensure proper scheduling for all the consumers of this 2025-03-17T18:45:23.0430840Z ``Future``. 2025-03-17T18:45:23.0430989Z 2025-03-17T18:45:23.0431146Z Args: 2025-03-17T18:45:23.0431443Z result (object): the result object of this ``Future``. 
2025-03-17T18:45:23.0431604Z 2025-03-17T18:45:23.0431762Z Example:: 2025-03-17T18:45:23.0432036Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_FUTURES) 2025-03-17T18:45:23.0432213Z >>> import threading 2025-03-17T18:45:23.0432427Z >>> import time 2025-03-17T18:45:23.0432633Z >>> def slow_set_future(fut, value): 2025-03-17T18:45:23.0432823Z ... time.sleep(0.5) 2025-03-17T18:45:23.0433009Z ... fut.set_result(value) 2025-03-17T18:45:23.0433224Z >>> fut = torch.futures.Future() 2025-03-17T18:45:23.0433403Z >>> t = threading.Thread( 2025-03-17T18:45:23.0433605Z ... target=slow_set_future, 2025-03-17T18:45:23.0433800Z ... args=(fut, torch.ones(2) * 3) 2025-03-17T18:45:23.0433962Z ... ) 2025-03-17T18:45:23.0434123Z >>> t.start() 2025-03-17T18:45:23.0434303Z >>> print(fut.wait()) 2025-03-17T18:45:23.0434521Z tensor([3., 3.]) 2025-03-17T18:45:23.0434678Z >>> t.join() 2025-03-17T18:45:23.0434841Z 2025-03-17T18:45:23.0435312Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0435468Z 2025-03-17T18:45:23.0435644Z warnings.warn(msg) 2025-03-17T18:45:23.0435809Z 2025-03-17T18:45:23.0436139Z --- Parse Warning: 18 / 116 --- 2025-03-17T18:45:23.0437931Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=compile_shader in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/mps/__init__.py line=144. 2025-03-17T18:45:23.0438424Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0438843Z Compiles compute shader from source and allows one to invoke kernels 2025-03-17T18:45:23.0439114Z defined there from the comfort of Python runtime 2025-03-17T18:45:23.0439289Z Example:: 2025-03-17T18:45:23.0439447Z 2025-03-17T18:45:23.0439705Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_MPS) 2025-03-17T18:45:23.0439923Z >>> lib = torch.mps.compile_shader( 2025-03-17T18:45:23.0440645Z ... "kernel void full(device float* out, constant float& val, uint idx [[thread_position_in_grid]]) { out[idx] = val; }" 2025-03-17T18:45:23.0440801Z ... ) 2025-03-17T18:45:23.0441024Z >>> x = torch.zeros(16, device="mps") 2025-03-17T18:45:23.0441201Z >>> lib.full(x, 3.14) 2025-03-17T18:45:23.0441363Z 2025-03-17T18:45:23.0441832Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0442117Z 2025-03-17T18:45:23.0442298Z warnings.warn(msg) 2025-03-17T18:45:23.0442459Z 2025-03-17T18:45:23.0442786Z --- Parse Warning: 19 / 116 --- 2025-03-17T18:45:23.0444357Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=sum in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/sparse/__init__.py line=202. 2025-03-17T18:45:23.0444847Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0445144Z Return the sum of each row of the given sparse tensor. 2025-03-17T18:45:23.0445295Z 2025-03-17T18:45:23.0445724Z Returns the sum of each row of the sparse tensor :attr:`input` in the given 2025-03-17T18:45:23.0446076Z dimensions :attr:`dim`. If :attr:`dim` is a list of dimensions, 2025-03-17T18:45:23.0446475Z reduce over all of them. When sum over all ``sparse_dim``, this method 2025-03-17T18:45:23.0446755Z returns a dense tensor instead of a sparse tensor. 
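One observation on the torch.sparse.sum example that follows: the continuation line of the torch.cat([...]) call carries no '...' doctest prompt, leaving an unclosed bracket on the '>>>' line, which would account for the TokenError reported at the end of this warning. A sketch of that statement with the continuation prompt restored (setup lines included here only so the fragment stands alone):

    >>> import torch
    >>> nnz = 3
    >>> dims = [5, 5, 2, 3]
    >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
    ...                torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)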
2025-03-17T18:45:23.0446914Z 2025-03-17T18:45:23.0447383Z All summed :attr:`dim` are squeezed (see :func:`torch.squeeze`), resulting an output 2025-03-17T18:45:23.0447746Z tensor having :attr:`dim` fewer dimensions than :attr:`input`. 2025-03-17T18:45:23.0447899Z 2025-03-17T18:45:23.0448314Z During backward, only gradients at ``nnz`` locations of :attr:`input` 2025-03-17T18:45:23.0448756Z will propagate back. Note that the gradients of :attr:`input` is coalesced. 2025-03-17T18:45:23.0448915Z 2025-03-17T18:45:23.0449106Z Args: 2025-03-17T18:45:23.0449342Z input (Tensor): the input sparse tensor 2025-03-17T18:45:23.0449842Z dim (int or tuple of ints): a dimension or a list of dimensions to reduce. Default: reduce 2025-03-17T18:45:23.0450025Z over all dims. 2025-03-17T18:45:23.0450493Z dtype (:class:`torch.dtype`, optional): the desired data type of returned Tensor. 2025-03-17T18:45:23.0450717Z Default: dtype of :attr:`input`. 2025-03-17T18:45:23.0450864Z 2025-03-17T18:45:23.0451042Z Example:: 2025-03-17T18:45:23.0451187Z 2025-03-17T18:45:23.0451365Z >>> nnz = 3 2025-03-17T18:45:23.0451539Z >>> dims = [5, 5, 2, 3] 2025-03-17T18:45:23.0451863Z >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)), 2025-03-17T18:45:23.0452212Z torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz) 2025-03-17T18:45:23.0452429Z >>> V = torch.randn(nnz, dims[2], dims[3]) 2025-03-17T18:45:23.0452632Z >>> size = torch.Size(dims) 2025-03-17T18:45:23.0452882Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:23.0453114Z >>> S = torch.sparse_coo_tensor(I, V, size) 2025-03-17T18:45:23.0453268Z >>> S 2025-03-17T18:45:23.0453477Z tensor(indices=tensor([[2, 0, 3], 2025-03-17T18:45:23.0453654Z [2, 4, 1]]), 2025-03-17T18:45:23.0453894Z values=tensor([[[-0.6438, -1.6467, 1.4004], 2025-03-17T18:45:23.0454083Z [ 0.3411, 0.0918, -0.2312]], 2025-03-17T18:45:23.0454243Z 2025-03-17T18:45:23.0454433Z [[ 0.5348, 0.0634, -2.0494], 2025-03-17T18:45:23.0454640Z [-0.7125, -1.0646, 2.1844]], 2025-03-17T18:45:23.0454785Z 2025-03-17T18:45:23.0454988Z [[ 0.1276, 0.1874, -0.6334], 2025-03-17T18:45:23.0455183Z [-1.9682, -0.5340, 0.7483]]]), 2025-03-17T18:45:23.0455454Z size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo) 2025-03-17T18:45:23.0455604Z 2025-03-17T18:45:23.0455970Z # when sum over only part of sparse_dims, return a sparse tensor 2025-03-17T18:45:23.0456166Z >>> torch.sparse.sum(S, [1, 3]) 2025-03-17T18:45:23.0456452Z tensor(indices=tensor([[0, 2, 3]]), 2025-03-17T18:45:23.0456651Z values=tensor([[-1.4512, 0.4073], 2025-03-17T18:45:23.0456846Z [-0.8901, 0.2017], 2025-03-17T18:45:23.0457031Z [-0.3183, -1.7539]]), 2025-03-17T18:45:23.0457286Z size=(5, 2), nnz=3, layout=torch.sparse_coo) 2025-03-17T18:45:23.0457439Z 2025-03-17T18:45:23.0457725Z # when sum over all sparse dim, return a dense tensor 2025-03-17T18:45:23.0457916Z # with summed dims squeezed 2025-03-17T18:45:23.0458134Z >>> torch.sparse.sum(S, [0, 1, 3]) 2025-03-17T18:45:23.0458320Z tensor([-2.6596, -1.1450]) 2025-03-17T18:45:23.0458492Z 2025-03-17T18:45:23.0458961Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0459114Z 2025-03-17T18:45:23.0459286Z warnings.warn(msg) 2025-03-17T18:45:23.0459445Z 2025-03-17T18:45:23.0459779Z --- Parse Warning: 20 / 116 --- 2025-03-17T18:45:23.0461363Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=vmap in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_functorch/apis.py line=39. 
2025-03-17T18:45:23.0461853Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0462018Z 2025-03-17T18:45:23.0462420Z vmap is the vectorizing map; ``vmap(func)`` returns a new function that 2025-03-17T18:45:23.0462791Z maps ``func`` over some dimension of the inputs. Semantically, vmap 2025-03-17T18:45:23.0463229Z pushes the map into PyTorch operations called by ``func``, effectively 2025-03-17T18:45:23.0463443Z vectorizing those operations. 2025-03-17T18:45:23.0463583Z 2025-03-17T18:45:23.0463980Z vmap is useful for handling batch dimensions: one can write a function 2025-03-17T18:45:23.0464357Z ``func`` that runs on examples and then lift it to a function that can 2025-03-17T18:45:23.0464762Z take batches of examples with ``vmap(func)``. vmap can also be used to 2025-03-17T18:45:23.0465065Z compute batched gradients when composed with autograd. 2025-03-17T18:45:23.0465223Z 2025-03-17T18:45:23.0465380Z .. note:: 2025-03-17T18:45:23.0465757Z :func:`torch.vmap` is aliased to :func:`torch.func.vmap` for 2025-03-17T18:45:23.0465991Z convenience. Use whichever one you'd like. 2025-03-17T18:45:23.0466159Z 2025-03-17T18:45:23.0466311Z Args: 2025-03-17T18:45:23.0466757Z func (function): A Python function that takes one or more arguments. 2025-03-17T18:45:23.0466971Z Must return one or more Tensors. 2025-03-17T18:45:23.0467330Z in_dims (int or nested structure): Specifies which dimension of the 2025-03-17T18:45:23.0467650Z inputs should be mapped over. ``in_dims`` should have a 2025-03-17T18:45:23.0468003Z structure like the inputs. If the ``in_dim`` for a particular 2025-03-17T18:45:23.0468351Z input is None, then that indicates there is no map dimension. 2025-03-17T18:45:23.0468512Z Default: 0. 2025-03-17T18:45:23.0468875Z out_dims (int or Tuple[int]): Specifies where the mapped dimension 2025-03-17T18:45:23.0469226Z should appear in the outputs. If ``out_dims`` is a Tuple, then 2025-03-17T18:45:23.0469503Z it should have one element per output. Default: 0. 2025-03-17T18:45:23.0469832Z randomness (str): Specifies whether the randomness in this 2025-03-17T18:45:23.0470223Z vmap should be the same or different across batches. If 'different', 2025-03-17T18:45:23.0470591Z the randomness for each batch will be different. If 'same', the 2025-03-17T18:45:23.0470992Z randomness will be the same across batches. If 'error', any calls to 2025-03-17T18:45:23.0471377Z random functions will error. Default: 'error'. WARNING: this flag 2025-03-17T18:45:23.0471830Z only applies to random PyTorch operations and does not apply to 2025-03-17T18:45:23.0472069Z Python's random module or numpy randomness. 2025-03-17T18:45:23.0472487Z chunk_size (None or int): If None (default), apply a single vmap over inputs. 2025-03-17T18:45:23.0472896Z If not None, then compute the vmap :attr:`chunk_size` samples at a time. 2025-03-17T18:45:23.0473363Z Note that :attr:`chunk_size=1` is equivalent to computing the vmap with a for-loop. 2025-03-17T18:45:23.0473863Z If you run into memory issues computing the vmap, please try a non-None chunk_size. 2025-03-17T18:45:23.0474011Z 2025-03-17T18:45:23.0474168Z Returns: 2025-03-17T18:45:23.0474525Z Returns a new "batched" function. It takes the same inputs as 2025-03-17T18:45:23.0474856Z ``func``, except each input has an extra dimension at the index 2025-03-17T18:45:23.0475209Z specified by ``in_dims``. 
It takes returns the same outputs as 2025-03-17T18:45:23.0475551Z ``func``, except each output has an extra dimension at the index 2025-03-17T18:45:23.0475744Z specified by ``out_dims``. 2025-03-17T18:45:23.0475892Z 2025-03-17T18:45:23.0476048Z .. warning: 2025-03-17T18:45:23.0476411Z :func:`vmap` works best with functional-style code. Please do not 2025-03-17T18:45:23.0476752Z perform any side-effects in ``func``, with the exception of 2025-03-17T18:45:23.0477179Z in-place PyTorch operations. Examples of side-effects include mutating 2025-03-17T18:45:23.0477601Z Python data structures and assigning values to variables not captured 2025-03-17T18:45:23.0477811Z in ``func``. 2025-03-17T18:45:23.0477965Z 2025-03-17T18:45:23.0478391Z One example of using :func:`vmap` is to compute batched dot products. PyTorch 2025-03-17T18:45:23.0478799Z doesn't provide a batched ``torch.dot`` API; instead of unsuccessfully 2025-03-17T18:45:23.0479201Z rummaging through docs, use :func:`vmap` to construct a new function. 2025-03-17T18:45:23.0479355Z 2025-03-17T18:45:23.0479602Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:23.0479971Z >>> batched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N] 2025-03-17T18:45:23.0480191Z >>> x, y = torch.randn(2, 5), torch.randn(2, 5) 2025-03-17T18:45:23.0480402Z >>> batched_dot(x, y) 2025-03-17T18:45:23.0480548Z 2025-03-17T18:45:23.0480972Z :func:`vmap` can be helpful in hiding batch dimensions, leading to a simpler 2025-03-17T18:45:23.0481160Z model authoring experience. 2025-03-17T18:45:23.0481316Z 2025-03-17T18:45:23.0481507Z >>> batch_size, feature_size = 3, 5 2025-03-17T18:45:23.0481821Z >>> weights = torch.randn(feature_size, requires_grad=True) 2025-03-17T18:45:23.0481966Z >>> 2025-03-17T18:45:23.0482170Z >>> def model(feature_vec): 2025-03-17T18:45:23.0482408Z >>> # Very simple linear model with activation 2025-03-17T18:45:23.0482656Z >>> return feature_vec.dot(weights).relu() 2025-03-17T18:45:23.0482799Z >>> 2025-03-17T18:45:23.0483072Z >>> examples = torch.randn(batch_size, feature_size) 2025-03-17T18:45:23.0483286Z >>> result = torch.vmap(model)(examples) 2025-03-17T18:45:23.0483435Z 2025-03-17T18:45:23.0483891Z :func:`vmap` can also help vectorize computations that were previously difficult 2025-03-17T18:45:23.0484326Z or impossible to batch. One example is higher-order gradient computation. 2025-03-17T18:45:23.0484750Z The PyTorch autograd engine computes vjps (vector-Jacobian products). 2025-03-17T18:45:23.0485184Z Computing a full Jacobian matrix for some function f: R^N -> R^N usually 2025-03-17T18:45:23.0485638Z requires N calls to ``autograd.grad``, one per Jacobian row. Using :func:`vmap`, 2025-03-17T18:45:23.0486087Z we can vectorize the whole computation, computing the Jacobian in a single 2025-03-17T18:45:23.0486270Z call to ``autograd.grad``. 
2025-03-17T18:45:23.0486497Z 2025-03-17T18:45:23.0486654Z >>> # Setup 2025-03-17T18:45:23.0486819Z >>> N = 5 2025-03-17T18:45:23.0486994Z >>> f = lambda x: x ** 2 2025-03-17T18:45:23.0487223Z >>> x = torch.randn(N, requires_grad=True) 2025-03-17T18:45:23.0487387Z >>> y = f(x) 2025-03-17T18:45:23.0487557Z >>> I_N = torch.eye(N) 2025-03-17T18:45:23.0487713Z >>> 2025-03-17T18:45:23.0487897Z >>> # Sequential approach 2025-03-17T18:45:23.0488292Z >>> jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0] 2025-03-17T18:45:23.0488492Z >>> for v in I_N.unbind()] 2025-03-17T18:45:23.0488724Z >>> jacobian = torch.stack(jacobian_rows) 2025-03-17T18:45:23.0488871Z >>> 2025-03-17T18:45:23.0489095Z >>> # vectorized gradient computation 2025-03-17T18:45:23.0489264Z >>> def get_vjp(v): 2025-03-17T18:45:23.0489498Z >>> return torch.autograd.grad(y, x, v) 2025-03-17T18:45:23.0489712Z >>> jacobian = torch.vmap(get_vjp)(I_N) 2025-03-17T18:45:23.0489866Z 2025-03-17T18:45:23.0490346Z :func:`vmap` can also be nested, producing an output with multiple batched dimensions 2025-03-17T18:45:23.0490500Z 2025-03-17T18:45:23.0490745Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:23.0491250Z >>> batched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0] 2025-03-17T18:45:23.0491486Z >>> x, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5) 2025-03-17T18:45:23.0491723Z >>> batched_dot(x, y) # tensor of size [2, 3] 2025-03-17T18:45:23.0491907Z 2025-03-17T18:45:23.0492355Z If the inputs are not batched along the first dimension, ``in_dims`` specifies 2025-03-17T18:45:23.0492625Z the dimension that each inputs are batched along as 2025-03-17T18:45:23.0492786Z 2025-03-17T18:45:23.0493029Z >>> torch.dot # [N], [N] -> [] 2025-03-17T18:45:23.0493446Z >>> batched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D] 2025-03-17T18:45:23.0493672Z >>> x, y = torch.randn(2, 5), torch.randn(2, 5) 2025-03-17T18:45:23.0494117Z >>> batched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension 2025-03-17T18:45:23.0494305Z 2025-03-17T18:45:23.0494780Z If there are multiple inputs each of which is batched along different dimensions, 2025-03-17T18:45:23.0495131Z ``in_dims`` must be a tuple with the batch dimension for each input as 2025-03-17T18:45:23.0495284Z 2025-03-17T18:45:23.0495530Z >>> torch.dot # [D], [D] -> [] 2025-03-17T18:45:23.0495955Z >>> batched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N] 2025-03-17T18:45:23.0496169Z >>> x, y = torch.randn(2, 5), torch.randn(5) 2025-03-17T18:45:23.0496620Z >>> batched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None 2025-03-17T18:45:23.0496780Z 2025-03-17T18:45:23.0497212Z If the input is a Python struct, ``in_dims`` must be a tuple containing a struct 2025-03-17T18:45:23.0497417Z matching the shape of the input: 2025-03-17T18:45:23.0497573Z 2025-03-17T18:45:23.0497818Z >>> f = lambda dict: torch.dot(dict['x'], dict['y']) 2025-03-17T18:45:23.0498044Z >>> x, y = torch.randn(2, 5), torch.randn(5) 2025-03-17T18:45:23.0498218Z >>> input = {'x': x, 'y': y} 2025-03-17T18:45:23.0498538Z >>> batched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},)) 2025-03-17T18:45:23.0498713Z >>> batched_dot(input) 2025-03-17T18:45:23.0498871Z 2025-03-17T18:45:23.0499373Z By default, the output is batched along the first dimension. 
However, it can be batched 2025-03-17T18:45:23.0499606Z along any dimension by using ``out_dims`` 2025-03-17T18:45:23.0499758Z 2025-03-17T18:45:23.0499945Z >>> f = lambda x: x ** 2 2025-03-17T18:45:23.0500120Z >>> x = torch.randn(2, 5) 2025-03-17T18:45:23.0500402Z >>> batched_pow = torch.vmap(f, out_dims=1) 2025-03-17T18:45:23.0500592Z >>> batched_pow(x) # [5, 2] 2025-03-17T18:45:23.0500740Z 2025-03-17T18:45:23.0501298Z For any function that uses kwargs, the returned function will not batch the kwargs but will 2025-03-17T18:45:23.0501461Z accept kwargs 2025-03-17T18:45:23.0501623Z 2025-03-17T18:45:23.0501802Z >>> x = torch.randn([2, 5]) 2025-03-17T18:45:23.0501988Z >>> def fn(x, scale=4.): 2025-03-17T18:45:23.0502160Z >>> return x * scale 2025-03-17T18:45:23.0502321Z >>> 2025-03-17T18:45:23.0502526Z >>> batched_pow = torch.vmap(fn) 2025-03-17T18:45:23.0502777Z >>> assert torch.allclose(batched_pow(x), x * 4) 2025-03-17T18:45:23.0503184Z >>> batched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5] 2025-03-17T18:45:23.0503346Z 2025-03-17T18:45:23.0503505Z .. note:: 2025-03-17T18:45:23.0503923Z vmap does not provide general autobatching or handle variable-length 2025-03-17T18:45:23.0504107Z sequences out of the box. 2025-03-17T18:45:23.0504258Z 2025-03-17T18:45:23.0504723Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0504878Z 2025-03-17T18:45:23.0505056Z warnings.warn(msg) 2025-03-17T18:45:23.0505214Z 2025-03-17T18:45:23.0505559Z --- Parse Warning: 21 / 116 --- 2025-03-17T18:45:23.0507255Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=triton_op in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/triton.py line=21. 2025-03-17T18:45:23.0507779Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0508257Z Create a custom operator whose implementation is backed by 1+ triton kernels. 2025-03-17T18:45:23.0508407Z 2025-03-17T18:45:23.0508801Z This is a more structured way of using triton kernels with PyTorch. 2025-03-17T18:45:23.0509263Z Prefer using triton kernels with no ``torch.library`` custom operator wrappers 2025-03-17T18:45:23.0509728Z (like :func:`torch.library.custom_op`, :func:`torch.library.triton_op`) because 2025-03-17T18:45:23.0509933Z that is simpler; 2025-03-17T18:45:23.0510413Z only use :func:`torch.library.custom_op`/:func:`torch.library.triton_op` if you 2025-03-17T18:45:23.0510830Z want to create an operator that behaves like PyTorch built-in operators. 2025-03-17T18:45:23.0511231Z For example, you may use a ``torch.library`` wrapper API to define the 2025-03-17T18:45:23.0511623Z behavior of the triton kernel when passed a tensor subclass or under 2025-03-17T18:45:23.0511823Z a TorchDispatchMode. 2025-03-17T18:45:23.0511966Z 2025-03-17T18:45:23.0512436Z Use :func:`torch.library.triton_op` instead of :func:`torch.library.custom_op` 2025-03-17T18:45:23.0512627Z when the implementation 2025-03-17T18:45:23.0513035Z consists of 1+ triton kernels. :func:`torch.library.custom_op` treats 2025-03-17T18:45:23.0513333Z custom operators as opaque (:func:`torch.compile` and 2025-03-17T18:45:23.0513780Z :func:`torch.export.export` will never trace into them), but ``triton_op`` 2025-03-17T18:45:23.0514191Z makes the implementation visible to these subsystems, allowing them 2025-03-17T18:45:23.0514408Z to optimize the triton kernel(s). 
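To make the contrast above concrete, here is a small CPU-only sketch of the opaque route via :func:`torch.library.custom_op` that the quoted paragraph compares against ``triton_op``. It is my own illustrative example, not taken from the docstring: the operator name ``mylib::my_add`` and its fake (meta) registration are invented for illustration, and it assumes a PyTorch recent enough to ship ``torch.library.custom_op``.

import torch
from torch.library import custom_op

# An opaque custom operator: torch.compile / torch.export see a single
# "mylib::my_add" node and never trace into its Python body.
@custom_op("mylib::my_add", mutates_args=())
def my_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return x + y

# Shape/dtype propagation for tracing backends (no real compute here).
@my_add.register_fake
def _(x, y):
    return torch.empty_like(x)

x = torch.randn(3)
y = torch.randn(3)
assert torch.allclose(my_add(x, y), x + y)

# Under torch.compile the call stays one opaque op in the captured graph,
# unlike a triton_op, whose kernels the compiler can see and optimize.
# (This last check assumes a working torch.compile / inductor toolchain.)
compiled = torch.compile(lambda a, b: my_add(a, b))
assert torch.allclose(compiled(x, y), x + y)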
2025-03-17T18:45:23.0514552Z 2025-03-17T18:45:23.0514901Z Note that ``fn`` must only consist of calls to PyTorch-understood 2025-03-17T18:45:23.0515327Z operators and triton kernels. Any triton kernels called inside ``fn`` 2025-03-17T18:45:23.0515690Z must be wrapped in a call to :func:`torch.library.wrap_triton`. 2025-03-17T18:45:23.0515840Z 2025-03-17T18:45:23.0516001Z Args: 2025-03-17T18:45:23.0516470Z name (str): A name for the custom op that looks like "{namespace}::{name}", 2025-03-17T18:45:23.0516860Z e.g. "mylib::my_linear". The name is used as the op's stable identifier 2025-03-17T18:45:23.0517159Z in PyTorch subsystems (e.g. torch.export, FX graphs). 2025-03-17T18:45:23.0517593Z To avoid name collisions, please use your project name as the namespace; 2025-03-17T18:45:23.0517970Z e.g. all custom ops in pytorch/fbgemm use "fbgemm" as the namespace. 2025-03-17T18:45:23.0518487Z mutates_args (Iterable[str] or "unknown"): The names of args that the function mutates. 2025-03-17T18:45:23.0518921Z This MUST be accurate, otherwise, the behavior is undefined. If "unknown", 2025-03-17T18:45:23.0519420Z it pessimistically assumes that all inputs to the operator are being mutated. 2025-03-17T18:45:23.0519755Z schema (None | str): A schema string for the operator. If None 2025-03-17T18:45:23.0520146Z (recommended) we'll infer a schema for the operator from its type 2025-03-17T18:45:23.0520522Z annotations. We recommend letting us infer a schema unless you 2025-03-17T18:45:23.0520740Z have a specific reason not to. 2025-03-17T18:45:23.0521004Z Example: "(Tensor x, int y) -> (Tensor, Tensor)". 2025-03-17T18:45:23.0521170Z 2025-03-17T18:45:23.0521331Z Example:: 2025-03-17T18:45:23.0521492Z 2025-03-17T18:45:23.0521736Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:23.0521919Z >>> import torch 2025-03-17T18:45:23.0522200Z >>> from torch.library import triton_op, wrap_triton 2025-03-17T18:45:23.0522404Z >>> 2025-03-17T18:45:23.0522575Z >>> import triton 2025-03-17T18:45:23.0522795Z >>> from triton import language as tl 2025-03-17T18:45:23.0522947Z >>> 2025-03-17T18:45:23.0523110Z >>> @triton.jit 2025-03-17T18:45:23.0523285Z >>> def add_kernel( 2025-03-17T18:45:23.0524998Z >>> in_ptr0, 2025-03-17T18:45:23.0525163Z >>> in_ptr1, 2025-03-17T18:45:23.0525319Z >>> out_ptr, 2025-03-17T18:45:23.0525501Z >>> n_elements, 2025-03-17T18:45:23.0525698Z >>> BLOCK_SIZE: "tl.constexpr", 2025-03-17T18:45:23.0525888Z >>> ): 2025-03-17T18:45:23.0526086Z >>> pid = tl.program_id(axis=0) 2025-03-17T18:45:23.0526295Z >>> block_start = pid * BLOCK_SIZE 2025-03-17T18:45:23.0526559Z >>> offsets = block_start + tl.arange(0, BLOCK_SIZE) 2025-03-17T18:45:23.0526773Z >>> mask = offsets < n_elements 2025-03-17T18:45:23.0527001Z >>> x = tl.load(in_ptr0 + offsets, mask=mask) 2025-03-17T18:45:23.0527238Z >>> y = tl.load(in_ptr1 + offsets, mask=mask) 2025-03-17T18:45:23.0527413Z >>> output = x + y 2025-03-17T18:45:23.0527683Z >>> tl.store(out_ptr + offsets, output, mask=mask) 2025-03-17T18:45:23.0527843Z >>> 2025-03-17T18:45:23.0528089Z >>> @triton_op("mylib::add", mutates_args={}) 2025-03-17T18:45:23.0528410Z >>> def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor: 2025-03-17T18:45:23.0528631Z >>> output = torch.empty_like(x) 2025-03-17T18:45:23.0528838Z >>> n_elements = output.numel() 2025-03-17T18:45:23.0529004Z >>> 2025-03-17T18:45:23.0529182Z >>> def grid(meta): 2025-03-17T18:45:23.0529501Z >>> return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) 2025-03-17T18:45:23.0529660Z >>> 
2025-03-17T18:45:23.0530001Z >>> # NB: we need to wrap the triton kernel in a call to wrap_triton 2025-03-17T18:45:23.0530339Z >>> wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16) 2025-03-17T18:45:23.0530526Z >>> return output 2025-03-17T18:45:23.0530678Z >>> 2025-03-17T18:45:23.0530958Z >>> @torch.compile 2025-03-17T18:45:23.0531129Z >>> def f(x, y): 2025-03-17T18:45:23.0531324Z >>> return add(x, y) 2025-03-17T18:45:23.0531477Z >>> 2025-03-17T18:45:23.0531694Z >>> x = torch.randn(3, device="cuda") 2025-03-17T18:45:23.0531906Z >>> y = torch.randn(3, device="cuda") 2025-03-17T18:45:23.0532075Z >>> 2025-03-17T18:45:23.0532238Z >>> z = f(x, y) 2025-03-17T18:45:23.0532465Z >>> assert torch.allclose(z, x + y) 2025-03-17T18:45:23.0532619Z 2025-03-17T18:45:23.0532787Z 2025-03-17T18:45:23.0533262Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0533412Z 2025-03-17T18:45:23.0533605Z warnings.warn(msg) 2025-03-17T18:45:23.0533757Z 2025-03-17T18:45:23.0534114Z --- Parse Warning: 22 / 116 --- 2025-03-17T18:45:23.0535768Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=wrap_triton in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_library/triton.py line=202. 2025-03-17T18:45:23.0536272Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0536634Z Allows capture of a triton kernel into a graph via make_fx or 2025-03-17T18:45:23.0537018Z non-strict ``torch.export``. 2025-03-17T18:45:23.0537163Z 2025-03-17T18:45:23.0537519Z These technologies perform Dispatcher-based tracing (via 2025-03-17T18:45:23.0537887Z ``__torch_dispatch__``) and cannot see calls to raw triton kernels. 2025-03-17T18:45:23.0538334Z The ``wrap_triton`` API wraps a triton kernel into a callable that 2025-03-17T18:45:23.0538543Z can actually be traced into a graph. 2025-03-17T18:45:23.0538708Z 2025-03-17T18:45:23.0539096Z Please use this API together with :func:`torch.library.triton_op`. 
2025-03-17T18:45:23.0539262Z 2025-03-17T18:45:23.0539436Z Examples: 2025-03-17T18:45:23.0539598Z 2025-03-17T18:45:23.0539775Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0539960Z >>> import torch 2025-03-17T18:45:23.0540132Z >>> import triton 2025-03-17T18:45:23.0540365Z >>> from triton import language as tl 2025-03-17T18:45:23.0540745Z >>> from torch.fx.experimental.proxy_tensor import make_fx 2025-03-17T18:45:23.0540990Z >>> from torch.library import wrap_triton 2025-03-17T18:45:23.0541141Z >>> 2025-03-17T18:45:23.0541314Z >>> @triton.jit 2025-03-17T18:45:23.0541490Z >>> def add_kernel( 2025-03-17T18:45:23.0541669Z >>> in_ptr0, 2025-03-17T18:45:23.0541834Z >>> in_ptr1, 2025-03-17T18:45:23.0542005Z >>> out_ptr, 2025-03-17T18:45:23.0542174Z >>> n_elements, 2025-03-17T18:45:23.0542393Z >>> BLOCK_SIZE: "tl.constexpr", 2025-03-17T18:45:23.0542546Z >>> ): 2025-03-17T18:45:23.0542769Z >>> pid = tl.program_id(axis=0) 2025-03-17T18:45:23.0542969Z >>> block_start = pid * BLOCK_SIZE 2025-03-17T18:45:23.0543249Z >>> offsets = block_start + tl.arange(0, BLOCK_SIZE) 2025-03-17T18:45:23.0543443Z >>> mask = offsets < n_elements 2025-03-17T18:45:23.0543691Z >>> x = tl.load(in_ptr0 + offsets, mask=mask) 2025-03-17T18:45:23.0543920Z >>> y = tl.load(in_ptr1 + offsets, mask=mask) 2025-03-17T18:45:23.0544112Z >>> output = x + y 2025-03-17T18:45:23.0544370Z >>> tl.store(out_ptr + offsets, output, mask=mask) 2025-03-17T18:45:23.0544529Z >>> 2025-03-17T18:45:23.0544698Z >>> def add(x, y): 2025-03-17T18:45:23.0544905Z >>> output = torch.empty_like(x) 2025-03-17T18:45:23.0545123Z >>> n_elements = output.numel() 2025-03-17T18:45:23.0545277Z >>> 2025-03-17T18:45:23.0545557Z >>> def grid_fn(meta): 2025-03-17T18:45:23.0545868Z >>> return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) 2025-03-17T18:45:23.0546031Z >>> 2025-03-17T18:45:23.0546388Z >>> wrap_triton(add_kernel)[grid_fn](x, y, output, n_elements, 16) 2025-03-17T18:45:23.0546630Z >>> return output 2025-03-17T18:45:23.0546782Z >>> 2025-03-17T18:45:23.0546996Z >>> x = torch.randn(3, device="cuda") 2025-03-17T18:45:23.0547205Z >>> y = torch.randn(3, device="cuda") 2025-03-17T18:45:23.0547402Z >>> gm = make_fx(add)(x, y) 2025-03-17T18:45:23.0547583Z >>> print(gm.code) 2025-03-17T18:45:23.0547795Z >>> # def forward(self, x_1, y_1): 2025-03-17T18:45:23.0548228Z >>> # empty_like = torch.ops.aten.empty_like.default(x_1, pin_memory = False) 2025-03-17T18:45:23.0548685Z >>> # triton_kernel_wrapper_mutation_proxy = triton_kernel_wrapper_mutation( 2025-03-17T18:45:23.0548917Z >>> # kernel_idx = 0, constant_args_idx = 0, 2025-03-17T18:45:23.0549132Z >>> # grid = [(1, 1, 1)], kwargs = { 2025-03-17T18:45:23.0549401Z >>> # 'in_ptr0': x_1, 'in_ptr1': y_1, 'out_ptr': empty_like, 2025-03-17T18:45:23.0549635Z >>> # 'n_elements': 3, 'BLOCK_SIZE': 16 2025-03-17T18:45:23.0549801Z >>> # }) 2025-03-17T18:45:23.0549994Z >>> # return empty_like 2025-03-17T18:45:23.0550143Z 2025-03-17T18:45:23.0550307Z 2025-03-17T18:45:23.0550776Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0550970Z 2025-03-17T18:45:23.0551149Z warnings.warn(msg) 2025-03-17T18:45:23.0551312Z 2025-03-17T18:45:23.0551661Z --- Parse Warning: 23 / 116 --- 2025-03-17T18:45:23.0553441Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assert_almost_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=326. 
2025-03-17T18:45:23.0553939Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0554096Z 2025-03-17T18:45:23.0554473Z Raises an AssertionError if two items are not equal up to desired 2025-03-17T18:45:23.0554679Z precision. 2025-03-17T18:45:23.0554829Z 2025-03-17T18:45:23.0555154Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:23.0555482Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:23.0555834Z instead of this function for more consistent floating point 2025-03-17T18:45:23.0556005Z comparisons. 2025-03-17T18:45:23.0556163Z 2025-03-17T18:45:23.0556555Z The test verifies that the elements of `actual` and `desired` satisfy. 2025-03-17T18:45:23.0556718Z 2025-03-17T18:45:23.0557007Z ``abs(desired-actual) < float64(1.5 * 10**(-decimal))`` 2025-03-17T18:45:23.0557175Z 2025-03-17T18:45:23.0557599Z That is a looser test than originally documented, but agrees with what the 2025-03-17T18:45:23.0558051Z actual implementation in `assert_array_almost_equal` did up to rounding 2025-03-17T18:45:23.0558488Z vagaries. An exception is raised at conflicting values. For ndarrays this 2025-03-17T18:45:23.0558718Z delegates to assert_array_almost_equal 2025-03-17T18:45:23.0558871Z 2025-03-17T18:45:23.0559029Z Parameters 2025-03-17T18:45:23.0559206Z ---------- 2025-03-17T18:45:23.0559374Z actual : array_like 2025-03-17T18:45:23.0559567Z The object to check. 2025-03-17T18:45:23.0559737Z desired : array_like 2025-03-17T18:45:23.0559931Z The expected object. 2025-03-17T18:45:23.0560109Z decimal : int, optional 2025-03-17T18:45:23.0560333Z Desired precision, default is 7. 2025-03-17T18:45:23.0560513Z err_msg : str, optional 2025-03-17T18:45:23.0560875Z The error message to be printed in case of failure. 2025-03-17T18:45:23.0561062Z verbose : bool, optional 2025-03-17T18:45:23.0561450Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:23.0561603Z 2025-03-17T18:45:23.0561766Z Raises 2025-03-17T18:45:23.0561929Z ------ 2025-03-17T18:45:23.0562113Z AssertionError 2025-03-17T18:45:23.0562468Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:23.0562625Z 2025-03-17T18:45:23.0562778Z See Also 2025-03-17T18:45:23.0562947Z -------- 2025-03-17T18:45:23.0563389Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:23.0563632Z relative and/or absolute precision. 2025-03-17T18:45:23.0564020Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:23.0564183Z 2025-03-17T18:45:23.0564338Z Examples 2025-03-17T18:45:23.0564505Z -------- 2025-03-17T18:45:23.0564807Z >>> from torch._numpy.testing import assert_almost_equal 2025-03-17T18:45:23.0565045Z >>> assert_almost_equal(2.3333333333333, 2.33333334) 2025-03-17T18:45:23.0565365Z >>> assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) 2025-03-17T18:45:23.0565570Z Traceback (most recent call last): 2025-03-17T18:45:23.0565734Z ... 2025-03-17T18:45:23.0565902Z AssertionError: 2025-03-17T18:45:23.0566133Z Arrays are not almost equal to 10 decimals 2025-03-17T18:45:23.0566304Z ACTUAL: 2.3333333333333 2025-03-17T18:45:23.0566486Z DESIRED: 2.33333334 2025-03-17T18:45:23.0566663Z 2025-03-17T18:45:23.0566938Z >>> assert_almost_equal(np.array([1.0,2.3333333333333]), 2025-03-17T18:45:23.0567158Z ... np.array([1.0,2.33333334]), decimal=9) 2025-03-17T18:45:23.0567370Z Traceback (most recent call last): 2025-03-17T18:45:23.0567521Z ... 
2025-03-17T18:45:23.0567701Z AssertionError: 2025-03-17T18:45:23.0567930Z Arrays are not almost equal to 9 decimals 2025-03-17T18:45:23.0568099Z 2025-03-17T18:45:23.0568289Z Mismatched elements: 1 / 2 (50%) 2025-03-17T18:45:23.0568532Z Max absolute difference: 6.666699636781459e-09 2025-03-17T18:45:23.0568769Z Max relative difference: 2.8571569790287484e-09 2025-03-17T18:45:23.0569059Z x: torch.ndarray([1.0000, 2.3333], dtype=float64) 2025-03-17T18:45:23.0569303Z y: torch.ndarray([1.0000, 2.3333], dtype=float64) 2025-03-17T18:45:23.0569466Z 2025-03-17T18:45:23.0569613Z 2025-03-17T18:45:23.0570096Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0570248Z 2025-03-17T18:45:23.0570433Z warnings.warn(msg) 2025-03-17T18:45:23.0570582Z 2025-03-17T18:45:23.0570933Z --- Parse Warning: 24 / 116 --- 2025-03-17T18:45:23.0572685Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assert_approx_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=451. 2025-03-17T18:45:23.0573184Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0573333Z 2025-03-17T18:45:23.0573752Z Raises an AssertionError if two items are not equal up to significant 2025-03-17T18:45:23.0573908Z digits. 2025-03-17T18:45:23.0574066Z 2025-03-17T18:45:23.0574370Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:23.0574704Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:23.0575048Z instead of this function for more consistent floating point 2025-03-17T18:45:23.0575231Z comparisons. 2025-03-17T18:45:23.0575379Z 2025-03-17T18:45:23.0575728Z Given two numbers, check that they are approximately equal. 2025-03-17T18:45:23.0576134Z Approximately equal is defined as the number of significant digits 2025-03-17T18:45:23.0576294Z that agree. 2025-03-17T18:45:23.0576518Z 2025-03-17T18:45:23.0576682Z Parameters 2025-03-17T18:45:23.0576853Z ---------- 2025-03-17T18:45:23.0577021Z actual : scalar 2025-03-17T18:45:23.0577215Z The object to check. 2025-03-17T18:45:23.0577382Z desired : scalar 2025-03-17T18:45:23.0577573Z The expected object. 2025-03-17T18:45:23.0577764Z significant : int, optional 2025-03-17T18:45:23.0577983Z Desired precision, default is 7. 2025-03-17T18:45:23.0578161Z err_msg : str, optional 2025-03-17T18:45:23.0578452Z The error message to be printed in case of failure. 2025-03-17T18:45:23.0578639Z verbose : bool, optional 2025-03-17T18:45:23.0579031Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:23.0579184Z 2025-03-17T18:45:23.0579347Z Raises 2025-03-17T18:45:23.0579508Z ------ 2025-03-17T18:45:23.0579692Z AssertionError 2025-03-17T18:45:23.0580047Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:23.0580213Z 2025-03-17T18:45:23.0580369Z See Also 2025-03-17T18:45:23.0580536Z -------- 2025-03-17T18:45:23.0580971Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:23.0581214Z relative and/or absolute precision. 
2025-03-17T18:45:23.0581604Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:23.0581765Z 2025-03-17T18:45:23.0581916Z Examples 2025-03-17T18:45:23.0582070Z -------- 2025-03-17T18:45:23.0582590Z >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) # doctest: +SKIP 2025-03-17T18:45:23.0583123Z >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, # doctest: +SKIP 2025-03-17T18:45:23.0583342Z ... significant=8) 2025-03-17T18:45:23.0583801Z >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, # doctest: +SKIP 2025-03-17T18:45:23.0584029Z ... significant=8) 2025-03-17T18:45:23.0584230Z Traceback (most recent call last): 2025-03-17T18:45:23.0584388Z ... 2025-03-17T18:45:23.0584559Z AssertionError: 2025-03-17T18:45:23.0584806Z Items are not equal to 8 significant digits: 2025-03-17T18:45:23.0585007Z ACTUAL: 1.234567e-21 2025-03-17T18:45:23.0585195Z DESIRED: 1.2345672e-21 2025-03-17T18:45:23.0585342Z 2025-03-17T18:45:23.0585646Z the evaluated condition that raises the exception is 2025-03-17T18:45:23.0585787Z 2025-03-17T18:45:23.0586107Z >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) 2025-03-17T18:45:23.0586264Z True 2025-03-17T18:45:23.0586417Z 2025-03-17T18:45:23.0586636Z 2025-03-17T18:45:23.0587120Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0587265Z 2025-03-17T18:45:23.0587458Z warnings.warn(msg) 2025-03-17T18:45:23.0587606Z 2025-03-17T18:45:23.0587968Z --- Parse Warning: 25 / 116 --- 2025-03-17T18:45:23.0589719Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assert_array_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=730. 2025-03-17T18:45:23.0590220Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0590368Z 2025-03-17T18:45:23.0590759Z Raises an AssertionError if two array_like objects are not equal. 2025-03-17T18:45:23.0590906Z 2025-03-17T18:45:23.0591296Z Given two array_like objects, check that the shape is equal and all 2025-03-17T18:45:23.0591701Z elements of these objects are equal (but see the Notes for the special 2025-03-17T18:45:23.0592078Z handling of a scalar). An exception is raised at shape mismatch or 2025-03-17T18:45:23.0592489Z conflicting values. In contrast to the standard usage in numpy, NaNs 2025-03-17T18:45:23.0592990Z are compared like numbers, no assertion is raised if both objects have 2025-03-17T18:45:23.0593179Z NaNs in the same positions. 2025-03-17T18:45:23.0593337Z 2025-03-17T18:45:23.0593756Z The usual caution for verifying equality with floating point numbers is 2025-03-17T18:45:23.0593925Z advised. 2025-03-17T18:45:23.0594075Z 2025-03-17T18:45:23.0594249Z Parameters 2025-03-17T18:45:23.0594404Z ---------- 2025-03-17T18:45:23.0594568Z x : array_like 2025-03-17T18:45:23.0594778Z The actual object to check. 2025-03-17T18:45:23.0594937Z y : array_like 2025-03-17T18:45:23.0595148Z The desired, expected object. 2025-03-17T18:45:23.0595325Z err_msg : str, optional 2025-03-17T18:45:23.0595614Z The error message to be printed in case of failure. 2025-03-17T18:45:23.0595796Z verbose : bool, optional 2025-03-17T18:45:23.0607007Z If True, the conflicting values are appended to the error message. 
2025-03-17T18:45:23.0607278Z strict : bool, optional 2025-03-17T18:45:23.0607673Z If True, raise an AssertionError when either the shape or the data 2025-03-17T18:45:23.0607991Z type of the array_like objects does not match. The special 2025-03-17T18:45:23.0608387Z handling for scalars mentioned in the Notes section is disabled. 2025-03-17T18:45:23.0608544Z 2025-03-17T18:45:23.0608713Z Raises 2025-03-17T18:45:23.0608872Z ------ 2025-03-17T18:45:23.0609056Z AssertionError 2025-03-17T18:45:23.0609303Z If actual and desired objects are not equal. 2025-03-17T18:45:23.0609461Z 2025-03-17T18:45:23.0609717Z See Also 2025-03-17T18:45:23.0609887Z -------- 2025-03-17T18:45:23.0610321Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:23.0610569Z relative and/or absolute precision. 2025-03-17T18:45:23.0610958Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:23.0611122Z 2025-03-17T18:45:23.0611283Z Notes 2025-03-17T18:45:23.0611434Z ----- 2025-03-17T18:45:23.0611778Z When one of `x` and `y` is a scalar and the other is array_like, the 2025-03-17T18:45:23.0612190Z function checks that each element of the array_like object is equal to 2025-03-17T18:45:23.0612613Z the scalar. This behaviour can be disabled with the `strict` parameter. 2025-03-17T18:45:23.0612802Z 2025-03-17T18:45:23.0612965Z Examples 2025-03-17T18:45:23.0613123Z -------- 2025-03-17T18:45:23.0613375Z The first assert does not raise an exception: 2025-03-17T18:45:23.0613522Z 2025-03-17T18:45:23.0613813Z >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:23.0614024Z ... [np.exp(0),2.33333, np.nan]) 2025-03-17T18:45:23.0614185Z 2025-03-17T18:45:23.0614591Z Use `assert_allclose` or one of the nulp (number of floating point values) 2025-03-17T18:45:23.0614807Z functions for these cases instead: 2025-03-17T18:45:23.0614955Z 2025-03-17T18:45:23.0615226Z >>> np.testing.assert_allclose([1.0,np.pi,np.nan], 2025-03-17T18:45:23.0615448Z ... [1, np.sqrt(np.pi)**2, np.nan], 2025-03-17T18:45:23.0615655Z ... rtol=1e-10, atol=0) 2025-03-17T18:45:23.0615803Z 2025-03-17T18:45:23.0616187Z As mentioned in the Notes section, `assert_array_equal` has special 2025-03-17T18:45:23.0616597Z handling for scalars. Here the test checks that each value in `x` is 3: 2025-03-17T18:45:23.0616759Z 2025-03-17T18:45:23.0616952Z >>> x = np.full((2, 5), fill_value=3) 2025-03-17T18:45:23.0617181Z >>> np.testing.assert_array_equal(x, 3) 2025-03-17T18:45:23.0617327Z 2025-03-17T18:45:23.0617727Z Use `strict` to raise an AssertionError when comparing a scalar with an 2025-03-17T18:45:23.0617879Z array: 2025-03-17T18:45:23.0618043Z 2025-03-17T18:45:23.0618311Z >>> np.testing.assert_array_equal(x, 3, strict=True) 2025-03-17T18:45:23.0618592Z Traceback (most recent call last): 2025-03-17T18:45:23.0618752Z ... 
2025-03-17T18:45:23.0618933Z AssertionError: 2025-03-17T18:45:23.0619111Z Arrays are not equal 2025-03-17T18:45:23.0619269Z 2025-03-17T18:45:23.0619464Z (shapes (2, 5), () mismatch) 2025-03-17T18:45:23.0619659Z x: torch.ndarray([[3, 3, 3, 3, 3], 2025-03-17T18:45:23.0619828Z [3, 3, 3, 3, 3]]) 2025-03-17T18:45:23.0620003Z y: torch.ndarray(3) 2025-03-17T18:45:23.0620161Z 2025-03-17T18:45:23.0620549Z The `strict` parameter also ensures that the array data types match: 2025-03-17T18:45:23.0620713Z 2025-03-17T18:45:23.0620888Z >>> x = np.array([2, 2, 2]) 2025-03-17T18:45:23.0621123Z >>> y = np.array([2., 2., 2.], dtype=np.float32) 2025-03-17T18:45:23.0621391Z >>> np.testing.assert_array_equal(x, y, strict=True) 2025-03-17T18:45:23.0621602Z Traceback (most recent call last): 2025-03-17T18:45:23.0621756Z ... 2025-03-17T18:45:23.0621933Z AssertionError: 2025-03-17T18:45:23.0622114Z Arrays are not equal 2025-03-17T18:45:23.0622284Z 2025-03-17T18:45:23.0622543Z (dtypes dtype("int64"), dtype("float32") mismatch) 2025-03-17T18:45:23.0622734Z x: torch.ndarray([2, 2, 2]) 2025-03-17T18:45:23.0622927Z y: torch.ndarray([2., 2., 2.]) 2025-03-17T18:45:23.0623087Z 2025-03-17T18:45:23.0623557Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0623723Z 2025-03-17T18:45:23.0623900Z warnings.warn(msg) 2025-03-17T18:45:23.0624060Z 2025-03-17T18:45:23.0624448Z --- Parse Warning: 26 / 116 --- 2025-03-17T18:45:23.0626293Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assert_array_almost_equal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=836. 2025-03-17T18:45:23.0626869Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0627042Z 2025-03-17T18:45:23.0627440Z Raises an AssertionError if two objects are not equal up to desired 2025-03-17T18:45:23.0627613Z precision. 2025-03-17T18:45:23.0627767Z 2025-03-17T18:45:23.0628089Z .. note:: It is recommended to use one of `assert_allclose`, 2025-03-17T18:45:23.0628458Z `assert_array_almost_equal_nulp` or `assert_array_max_ulp` 2025-03-17T18:45:23.0628808Z instead of this function for more consistent floating point 2025-03-17T18:45:23.0628982Z comparisons. 2025-03-17T18:45:23.0629132Z 2025-03-17T18:45:23.0629594Z The test verifies identical shapes and that the elements of ``actual`` and 2025-03-17T18:45:23.0629765Z ``desired`` satisfy. 2025-03-17T18:45:23.0629924Z 2025-03-17T18:45:23.0630155Z ``abs(desired-actual) < 1.5 * 10**(-decimal)`` 2025-03-17T18:45:23.0630317Z 2025-03-17T18:45:23.0630736Z That is a looser test than originally documented, but agrees with what the 2025-03-17T18:45:23.0631201Z actual implementation did up to rounding vagaries. An exception is raised 2025-03-17T18:45:23.0631638Z at shape mismatch or conflicting values. In contrast to the standard usage 2025-03-17T18:45:23.0632052Z in numpy, NaNs are compared like numbers, no assertion is raised if both 2025-03-17T18:45:23.0632276Z objects have NaNs in the same positions. 2025-03-17T18:45:23.0632433Z 2025-03-17T18:45:23.0632599Z Parameters 2025-03-17T18:45:23.0632771Z ---------- 2025-03-17T18:45:23.0632929Z x : array_like 2025-03-17T18:45:23.0633136Z The actual object to check. 2025-03-17T18:45:23.0633295Z y : array_like 2025-03-17T18:45:23.0633507Z The desired, expected object. 2025-03-17T18:45:23.0633685Z decimal : int, optional 2025-03-17T18:45:23.0633907Z Desired precision, default is 6. 
2025-03-17T18:45:23.0634083Z err_msg : str, optional 2025-03-17T18:45:23.0634368Z The error message to be printed in case of failure. 2025-03-17T18:45:23.0634610Z verbose : bool, optional 2025-03-17T18:45:23.0634989Z If True, the conflicting values are appended to the error message. 2025-03-17T18:45:23.0635135Z 2025-03-17T18:45:23.0635302Z Raises 2025-03-17T18:45:23.0635458Z ------ 2025-03-17T18:45:23.0635634Z AssertionError 2025-03-17T18:45:23.0635993Z If actual and desired are not equal up to specified precision. 2025-03-17T18:45:23.0636155Z 2025-03-17T18:45:23.0636302Z See Also 2025-03-17T18:45:23.0636454Z -------- 2025-03-17T18:45:23.0637094Z assert_allclose: Compare two array_like objects for equality with desired 2025-03-17T18:45:23.0637327Z relative and/or absolute precision. 2025-03-17T18:45:23.0637719Z assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal 2025-03-17T18:45:23.0637867Z 2025-03-17T18:45:23.0638030Z Examples 2025-03-17T18:45:23.0638185Z -------- 2025-03-17T18:45:23.0638431Z the first assert does not raise an exception 2025-03-17T18:45:23.0638587Z 2025-03-17T18:45:23.0638904Z >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], 2025-03-17T18:45:23.0639107Z ... [1.0,2.333,np.nan]) 2025-03-17T18:45:23.0639258Z 2025-03-17T18:45:23.0639578Z >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:23.0639803Z ... [1.0,2.33339,np.nan], decimal=5) 2025-03-17T18:45:23.0640010Z Traceback (most recent call last): 2025-03-17T18:45:23.0640168Z ... 2025-03-17T18:45:23.0640342Z AssertionError: 2025-03-17T18:45:23.0640647Z Arrays are not almost equal to 5 decimals 2025-03-17T18:45:23.0640809Z 2025-03-17T18:45:23.0641012Z Mismatched elements: 1 / 3 (33.3%) 2025-03-17T18:45:23.0641244Z Max absolute difference: 5.999999999994898e-05 2025-03-17T18:45:23.0641491Z Max relative difference: 2.5713661239633743e-05 2025-03-17T18:45:23.0641787Z x: torch.ndarray([1.0000, 2.3333, nan], dtype=float64) 2025-03-17T18:45:23.0642085Z y: torch.ndarray([1.0000, 2.3334, nan], dtype=float64) 2025-03-17T18:45:23.0642233Z 2025-03-17T18:45:23.0642562Z >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], 2025-03-17T18:45:23.0642815Z ... [1.0,2.33333, 5], decimal=5) 2025-03-17T18:45:23.0643036Z Traceback (most recent call last): 2025-03-17T18:45:23.0643183Z ... 2025-03-17T18:45:23.0643368Z AssertionError: 2025-03-17T18:45:23.0643585Z Arrays are not almost equal to 5 decimals 2025-03-17T18:45:23.0643763Z 2025-03-17T18:45:23.0643952Z x and y nan location mismatch: 2025-03-17T18:45:23.0644254Z x: torch.ndarray([1.0000, 2.3333, nan], dtype=float64) 2025-03-17T18:45:23.0644531Z y: torch.ndarray([1.0000, 2.3333, 5.0000], dtype=float64) 2025-03-17T18:45:23.0644684Z 2025-03-17T18:45:23.0644845Z 2025-03-17T18:45:23.0645317Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0645478Z 2025-03-17T18:45:23.0645657Z warnings.warn(msg) 2025-03-17T18:45:23.0645812Z 2025-03-17T18:45:23.0646172Z --- Parse Warning: 27 / 116 --- 2025-03-17T18:45:23.0647990Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=clear_and_catch_warnings in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_numpy/testing/utils.py line=1786. 
2025-03-17T18:45:23.0648479Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0648892Z Context manager that resets warning registry for catching warnings 2025-03-17T18:45:23.0649046Z 2025-03-17T18:45:23.0649507Z Warnings can be slippery, because, whenever a warning is triggered, Python 2025-03-17T18:45:23.0649921Z adds a ``__warningregistry__`` member to the *calling* module. This makes 2025-03-17T18:45:23.0650460Z it impossible to retrigger the warning in this module, whatever you put in 2025-03-17T18:45:23.0650902Z the warnings filters. This context manager accepts a sequence of `modules` 2025-03-17T18:45:23.0651163Z as a keyword argument to its constructor and: 2025-03-17T18:45:23.0651313Z 2025-03-17T18:45:23.0651735Z * stores and removes any ``__warningregistry__`` entries in given `modules` 2025-03-17T18:45:23.0651887Z on entry; 2025-03-17T18:45:23.0652246Z * resets ``__warningregistry__`` to its previous state on exit. 2025-03-17T18:45:23.0652393Z 2025-03-17T18:45:23.0652822Z This makes it possible to trigger any warning afresh inside the context 2025-03-17T18:45:23.0653148Z manager without disturbing the state of warnings outside. 2025-03-17T18:45:23.0653312Z 2025-03-17T18:45:23.0653744Z For compatibility with Python 3.0, please consider all arguments to be 2025-03-17T18:45:23.0653918Z keyword-only. 2025-03-17T18:45:23.0654073Z 2025-03-17T18:45:23.0654244Z Parameters 2025-03-17T18:45:23.0654400Z ---------- 2025-03-17T18:45:23.0654594Z record : bool, optional 2025-03-17T18:45:23.0654929Z Specifies whether warnings should be captured by a custom 2025-03-17T18:45:23.0655380Z implementation of ``warnings.showwarning()`` and be appended to a list 2025-03-17T18:45:23.0655759Z returned by the context manager. Otherwise None is returned by the 2025-03-17T18:45:23.0656187Z context manager. The objects appended to the list are arguments whose 2025-03-17T18:45:23.0656526Z attributes mirror the arguments to ``showwarning()``. 2025-03-17T18:45:23.0656727Z modules : sequence, optional 2025-03-17T18:45:23.0657138Z Sequence of modules for which to reset warnings registry on entry and 2025-03-17T18:45:23.0657499Z restore on exit. To work correctly, all 'ignore' filters should 2025-03-17T18:45:23.0657704Z filter by one of these modules. 2025-03-17T18:45:23.0657865Z 2025-03-17T18:45:23.0658022Z Examples 2025-03-17T18:45:23.0658185Z -------- 2025-03-17T18:45:23.0658357Z >>> import warnings 2025-03-17T18:45:23.0658702Z >>> with np.testing.clear_and_catch_warnings( # doctest: +SKIP 2025-03-17T18:45:23.0658981Z ... modules=[np.core.fromnumeric]): 2025-03-17T18:45:23.0659208Z ... warnings.simplefilter('always') 2025-03-17T18:45:23.0659609Z ... warnings.filterwarnings('ignore', module='np.core.fromnumeric') 2025-03-17T18:45:23.0659933Z ... # do something that raises a warning but ignore those in 2025-03-17T18:45:23.0660126Z ... # np.core.fromnumeric 2025-03-17T18:45:23.0660291Z 2025-03-17T18:45:23.0660748Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0660908Z 2025-03-17T18:45:23.0661084Z warnings.warn(msg) 2025-03-17T18:45:23.0661242Z 2025-03-17T18:45:23.0661579Z --- Parse Warning: 28 / 116 --- 2025-03-17T18:45:23.0663318Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Conv1d in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/conv.py line=354. 
2025-03-17T18:45:23.0663806Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0664201Z Applies a 1D convolution over a quantized input signal composed of 2025-03-17T18:45:23.0664405Z several quantized input planes. 2025-03-17T18:45:23.0664570Z 2025-03-17T18:45:23.0664952Z For details on input arguments, parameters, and implementation see 2025-03-17T18:45:23.0665154Z :class:`~torch.nn.Conv1d`. 2025-03-17T18:45:23.0665296Z 2025-03-17T18:45:23.0665462Z .. note:: 2025-03-17T18:45:23.0665826Z Only `zeros` is supported for the :attr:`padding_mode` argument. 2025-03-17T18:45:23.0666039Z 2025-03-17T18:45:23.0666205Z .. note:: 2025-03-17T18:45:23.0666603Z Only `torch.quint8` is supported for the input data type. 2025-03-17T18:45:23.0666767Z 2025-03-17T18:45:23.0666914Z 2025-03-17T18:45:23.0667086Z Attributes: 2025-03-17T18:45:23.0667473Z weight (Tensor): packed tensor derived from the learnable weight 2025-03-17T18:45:23.0667665Z parameter. 2025-03-17T18:45:23.0667925Z scale (Tensor): scalar for the output scale 2025-03-17T18:45:23.0668233Z zero_point (Tensor): scalar for the output zero point 2025-03-17T18:45:23.0668385Z 2025-03-17T18:45:23.0668669Z See :class:`~torch.nn.Conv1d` for other attributes. 2025-03-17T18:45:23.0668820Z 2025-03-17T18:45:23.0668999Z Examples:: 2025-03-17T18:45:23.0669148Z 2025-03-17T18:45:23.0669421Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_QENGINE) 2025-03-17T18:45:23.0669659Z >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2) 2025-03-17T18:45:23.0669876Z >>> input = torch.randn(20, 16, 100) 2025-03-17T18:45:23.0670071Z >>> # quantize input to quint8 2025-03-17T18:45:23.0670262Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0670644Z >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, 2025-03-17T18:45:23.0670882Z ... dtype=torch.quint8) 2025-03-17T18:45:23.0671061Z >>> output = m(q_input) 2025-03-17T18:45:23.0671224Z 2025-03-17T18:45:23.0671373Z 2025-03-17T18:45:23.0671888Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0672035Z 2025-03-17T18:45:23.0672226Z warnings.warn(msg) 2025-03-17T18:45:23.0672367Z 2025-03-17T18:45:23.0672698Z --- Parse Warning: 29 / 116 --- 2025-03-17T18:45:23.0674402Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=LSTM in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/nn/quantized/modules/rnn.py line=11. 2025-03-17T18:45:23.0674900Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0675127Z A quantized long short-term memory (LSTM). 2025-03-17T18:45:23.0675333Z 2025-03-17T18:45:23.0675845Z For the description and the argument types, please, refer to :class:`~torch.nn.LSTM` 2025-03-17T18:45:23.0676007Z 2025-03-17T18:45:23.0676168Z Attributes: 2025-03-17T18:45:23.0676387Z layers : instances of the `_LSTMLayer` 2025-03-17T18:45:23.0676552Z 2025-03-17T18:45:23.0676704Z .. note:: 2025-03-17T18:45:23.0677103Z To access the weights and biases, you need to access them per layer. 2025-03-17T18:45:23.0677416Z See examples in :class:`~torch.ao.nn.quantizable.LSTM` 2025-03-17T18:45:23.0677574Z 2025-03-17T18:45:23.0677737Z Examples:: 2025-03-17T18:45:23.0677937Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0678133Z >>> custom_module_config = { 2025-03-17T18:45:23.0678390Z ... 'float_to_observed_custom_module_class': { 2025-03-17T18:45:23.0678609Z ... nn.LSTM: nn.quantizable.LSTM, 2025-03-17T18:45:23.0678774Z ... 
}, 2025-03-17T18:45:23.0679038Z ... 'observed_to_quantized_custom_module_class': { 2025-03-17T18:45:23.0679307Z ... nn.quantizable.LSTM: nn.quantized.LSTM, 2025-03-17T18:45:23.0679454Z ... } 2025-03-17T18:45:23.0679626Z ... } 2025-03-17T18:45:23.0680023Z >>> tq.prepare(model, prepare_custom_module_class=custom_module_config) 2025-03-17T18:45:23.0680426Z >>> tq.convert(model, convert_custom_module_class=custom_module_config) 2025-03-17T18:45:23.0680579Z 2025-03-17T18:45:23.0681047Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0681264Z 2025-03-17T18:45:23.0681447Z warnings.warn(msg) 2025-03-17T18:45:23.0681595Z 2025-03-17T18:45:23.0681931Z --- Parse Warning: 30 / 116 --- 2025-03-17T18:45:23.0683941Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=BaseSparsifier.squash_mask in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/pruning/sparsifier/base_sparsifier.py line=227. 2025-03-17T18:45:23.0684440Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0684755Z Squashes the sparse masks into the appropriate tensors. 2025-03-17T18:45:23.0684917Z 2025-03-17T18:45:23.0685292Z If either the `params_to_keep` or `params_to_keep_per_layer` is set, 2025-03-17T18:45:23.0685633Z the module will have a `sparse_params` dict attached to it. 2025-03-17T18:45:23.0685777Z 2025-03-17T18:45:23.0685937Z Args: 2025-03-17T18:45:23.0686279Z params_to_keep: List of keys to save in the module or a dict 2025-03-17T18:45:23.0686575Z representing the modules and keys that will have 2025-03-17T18:45:23.0686791Z sparsity parameters saved 2025-03-17T18:45:23.0687188Z params_to_keep_per_layer: Dict to specify the params that should be 2025-03-17T18:45:23.0687459Z saved for specific layers. The keys in the dict 2025-03-17T18:45:23.0687748Z should be the module fqn, while the values should 2025-03-17T18:45:23.0688064Z be a list of strings with the names of the variables 2025-03-17T18:45:23.0688290Z to save in the `sparse_params` 2025-03-17T18:45:23.0688437Z 2025-03-17T18:45:23.0688636Z Examples: 2025-03-17T18:45:23.0688875Z >>> # xdoctest: +SKIP("locals are undefined") 2025-03-17T18:45:23.0689100Z >>> # Don't save any sparse params 2025-03-17T18:45:23.0689308Z >>> sparsifier.squash_mask() 2025-03-17T18:45:23.0689563Z >>> hasattr(model.submodule1, 'sparse_params') 2025-03-17T18:45:23.0689721Z False 2025-03-17T18:45:23.0689911Z 2025-03-17T18:45:23.0690123Z >>> # Keep sparse params per layer 2025-03-17T18:45:23.0690334Z >>> sparsifier.squash_mask( 2025-03-17T18:45:23.0690533Z ... params_to_keep_per_layer={ 2025-03-17T18:45:23.0690776Z ... 'submodule1.linear1': ('foo', 'bar'), 2025-03-17T18:45:23.0691004Z ... 'submodule2.linear42': ('baz',) 2025-03-17T18:45:23.0691172Z ... 
}) 2025-03-17T18:45:23.0691450Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:23.0691640Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:23.0691925Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:23.0692105Z {'baz': 0.1} 2025-03-17T18:45:23.0692255Z 2025-03-17T18:45:23.0692476Z >>> # Keep sparse params for all layers 2025-03-17T18:45:23.0692799Z >>> sparsifier.squash_mask(params_to_keep=('foo', 'bar')) 2025-03-17T18:45:23.0693085Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:23.0693272Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:23.0693552Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:23.0693734Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:23.0693884Z 2025-03-17T18:45:23.0694248Z >>> # Keep some sparse params for all layers, and specific ones for 2025-03-17T18:45:23.0694433Z >>> # some other layers 2025-03-17T18:45:23.0694647Z >>> sparsifier.squash_mask( 2025-03-17T18:45:23.0694856Z ... params_to_keep=('foo', 'bar'), 2025-03-17T18:45:23.0695129Z ... params_to_keep_per_layer={ 2025-03-17T18:45:23.0695354Z ... 'submodule2.linear42': ('baz',) 2025-03-17T18:45:23.0695523Z ... }) 2025-03-17T18:45:23.0695798Z >>> print(model.submodule1.linear1.sparse_params) 2025-03-17T18:45:23.0695994Z {'foo': 42, 'bar': 24} 2025-03-17T18:45:23.0696272Z >>> print(model.submodule2.linear42.sparse_params) 2025-03-17T18:45:23.0696481Z {'foo': 42, 'bar': 24, 'baz': 0.1} 2025-03-17T18:45:23.0696631Z 2025-03-17T18:45:23.0697108Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0697258Z 2025-03-17T18:45:23.0697444Z warnings.warn(msg) 2025-03-17T18:45:23.0697601Z 2025-03-17T18:45:23.0697936Z --- Parse Warning: 31 / 116 --- 2025-03-17T18:45:23.0699891Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DTypeConfig in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/backend_config/backend_config.py line=181. 2025-03-17T18:45:23.0700380Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0700529Z 2025-03-17T18:45:23.0701006Z Config object that specifies the supported data types passed as arguments to 2025-03-17T18:45:23.0701447Z quantize ops in the reference model spec, for input and output activations, 2025-03-17T18:45:23.0701623Z weights, and biases. 2025-03-17T18:45:23.0701767Z 2025-03-17T18:45:23.0702098Z For example, consider the following reference model: 2025-03-17T18:45:23.0702245Z 2025-03-17T18:45:23.0702531Z quant1 - [dequant1 - fp32_linear - quant2] - dequant2 2025-03-17T18:45:23.0702679Z 2025-03-17T18:45:23.0703074Z The pattern in the square brackets refers to the reference pattern of 2025-03-17T18:45:23.0703518Z statically quantized linear. Setting the input dtype as `torch.quint8` 2025-03-17T18:45:23.0703944Z in the DTypeConfig means we pass in `torch.quint8` as the dtype argument 2025-03-17T18:45:23.0704355Z to the first quantize op (quant1). Similarly, setting the output dtype as 2025-03-17T18:45:23.0704761Z `torch.quint8` means we pass in `torch.quint8` as the dtype argument to 2025-03-17T18:45:23.0704991Z the second quantize op (quant2). 2025-03-17T18:45:23.0705144Z 2025-03-17T18:45:23.0705532Z Note that the dtype here does not refer to the interface dtypes of the 2025-03-17T18:45:23.0705913Z op. For example, the "input dtype" here is not the dtype of the input 2025-03-17T18:45:23.0706303Z tensor passed to the quantized linear op. 
Though it can still be the 2025-03-17T18:45:23.0706737Z same as the interface dtype, this is not always the case, e.g. the 2025-03-17T18:45:23.0707137Z interface dtype is fp32 in dynamic quantization but the "input dtype" 2025-03-17T18:45:23.0707536Z specified in the DTypeConfig would still be quint8. The semantics of 2025-03-17T18:45:23.0707916Z dtypes here are the same as the semantics of the dtypes specified in 2025-03-17T18:45:23.0708084Z the observers. 2025-03-17T18:45:23.0708234Z 2025-03-17T18:45:23.0708610Z These dtypes are matched against the ones specified in the user's 2025-03-17T18:45:23.0709007Z QConfig. If there is a match, and the QConfig satisfies the constraints 2025-03-17T18:45:23.0709411Z specified in the DTypeConfig (if any), then we will quantize the given 2025-03-17T18:45:23.0709825Z pattern using this DTypeConfig. Otherwise, the QConfig is ignored and 2025-03-17T18:45:23.0710034Z the pattern will not be quantized. 2025-03-17T18:45:23.0710178Z 2025-03-17T18:45:23.0710354Z Example usage:: 2025-03-17T18:45:23.0710498Z 2025-03-17T18:45:23.0710687Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:23.0710886Z >>> dtype_config1 = DTypeConfig( 2025-03-17T18:45:23.0711084Z ... input_dtype=torch.quint8, 2025-03-17T18:45:23.0711347Z ... output_dtype=torch.quint8, 2025-03-17T18:45:23.0711538Z ... weight_dtype=torch.qint8, 2025-03-17T18:45:23.0711735Z ... bias_dtype=torch.float) 2025-03-17T18:45:23.0711875Z 2025-03-17T18:45:23.0712078Z >>> dtype_config2 = DTypeConfig( 2025-03-17T18:45:23.0712306Z ... input_dtype=DTypeWithConstraints( 2025-03-17T18:45:23.0712494Z ... dtype=torch.quint8, 2025-03-17T18:45:23.0712684Z ... quant_min_lower_bound=0, 2025-03-17T18:45:23.0712895Z ... quant_max_upper_bound=255, 2025-03-17T18:45:23.0713045Z ... ), 2025-03-17T18:45:23.0713285Z ... output_dtype=DTypeWithConstraints( 2025-03-17T18:45:23.0713461Z ... dtype=torch.quint8, 2025-03-17T18:45:23.0713663Z ... quant_min_lower_bound=0, 2025-03-17T18:45:23.0713855Z ... quant_max_upper_bound=255, 2025-03-17T18:45:23.0714015Z ... ), 2025-03-17T18:45:23.0714241Z ... weight_dtype=DTypeWithConstraints( 2025-03-17T18:45:23.0714434Z ... dtype=torch.qint8, 2025-03-17T18:45:23.0714635Z ... quant_min_lower_bound=-128, 2025-03-17T18:45:23.0714839Z ... quant_max_upper_bound=127, 2025-03-17T18:45:23.0714991Z ... ), 2025-03-17T18:45:23.0715188Z ... 
bias_dtype=torch.float) 2025-03-17T18:45:23.0715335Z 2025-03-17T18:45:23.0715536Z >>> dtype_config1.input_dtype 2025-03-17T18:45:23.0715698Z torch.quint8 2025-03-17T18:45:23.0715853Z 2025-03-17T18:45:23.0716093Z >>> dtype_config2.input_dtype 2025-03-17T18:45:23.0716260Z torch.quint8 2025-03-17T18:45:23.0716403Z 2025-03-17T18:45:23.0716650Z >>> dtype_config2.input_dtype_with_constraints 2025-03-17T18:45:23.0717666Z DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None) 2025-03-17T18:45:23.0717833Z 2025-03-17T18:45:23.0718301Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0718451Z 2025-03-17T18:45:23.0718625Z warnings.warn(msg) 2025-03-17T18:45:23.0718766Z 2025-03-17T18:45:23.0719114Z --- Parse Warning: 32 / 116 --- 2025-03-17T18:45:23.0721528Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ModelReportVisualizer.generate_filtered_tables in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=301. 2025-03-17T18:45:23.0722026Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0722182Z 2025-03-17T18:45:23.0722670Z Takes in optional filter values and generates two tables with desired information. 2025-03-17T18:45:23.0722826Z 2025-03-17T18:45:23.0723215Z The generated tables are presented in both a list-of-lists format 2025-03-17T18:45:23.0723358Z 2025-03-17T18:45:23.0723749Z The reason for the two tables are that they handle different things: 2025-03-17T18:45:23.0724036Z 1.) the first table handles all tensor level information 2025-03-17T18:45:23.0724442Z 2.) the second table handles and displays all channel based information 2025-03-17T18:45:23.0724589Z 2025-03-17T18:45:23.0725197Z The reasoning for this is that having all the info in one table can make it ambiguous which collected 2025-03-17T18:45:23.0725806Z statistics are global, and which are actually per-channel, so it's better to split it up into two 2025-03-17T18:45:23.0726483Z tables. This also makes the information much easier to digest given the plethora of statistics collected 2025-03-17T18:45:23.0726631Z 2025-03-17T18:45:23.0726816Z Tensor table columns: 2025-03-17T18:45:23.0727156Z idx layer_fqn feature_1 feature_2 feature_3 .... feature_n 2025-03-17T18:45:23.0727489Z ---- --------- --------- --------- --------- --------- 2025-03-17T18:45:23.0727645Z 2025-03-17T18:45:23.0727844Z Per-Channel table columns: 2025-03-17T18:45:23.0728243Z idx layer_fqn channel feature_1 feature_2 feature_3 .... 
feature_n 2025-03-17T18:45:23.0728549Z ---- --------- ------- --------- --------- --------- --------- 2025-03-17T18:45:23.0728691Z 2025-03-17T18:45:23.0728850Z Args: 2025-03-17T18:45:23.0729332Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:23.0729542Z contain this filter substring 2025-03-17T18:45:23.0729832Z Default = "", results in all the features being printed 2025-03-17T18:45:23.0730318Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:23.0730771Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:23.0730933Z 2025-03-17T18:45:23.0731138Z Returns a dictionary with two keys: 2025-03-17T18:45:23.0731461Z (Dict[str, Tuple[List, List]]) A dict containing two keys: 2025-03-17T18:45:23.0731680Z "tensor_level_info", "channel_level_info" 2025-03-17T18:45:23.0731887Z Each key maps to a tuple with: 2025-03-17T18:45:23.0732098Z A list of the headers of each table 2025-03-17T18:45:23.0732437Z A list of lists containing the table information row by row 2025-03-17T18:45:23.0732747Z The 0th index row will contain the headers of the columns 2025-03-17T18:45:23.0733008Z The rest of the rows will contain data 2025-03-17T18:45:23.0733154Z 2025-03-17T18:45:23.0733318Z Example Use: 2025-03-17T18:45:23.0733545Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.0733815Z >>> mod_report_visualizer.generate_filtered_tables( 2025-03-17T18:45:23.0734037Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:23.0734235Z ... module_fqn_filter = "block1" 2025-03-17T18:45:23.0734736Z ... ) # generates table with per_channel_min info for all modules in block 1 of the model 2025-03-17T18:45:23.0734893Z 2025-03-17T18:45:23.0735357Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0735543Z 2025-03-17T18:45:23.0735715Z warnings.warn(msg) 2025-03-17T18:45:23.0735869Z 2025-03-17T18:45:23.0736193Z --- Parse Warning: 33 / 116 --- 2025-03-17T18:45:23.0738757Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ModelReportVisualizer.generate_table_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=400. 2025-03-17T18:45:23.0739245Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0739403Z 2025-03-17T18:45:23.0739904Z Takes in optional filter values and prints out formatted tables of the information. 2025-03-17T18:45:23.0740062Z 2025-03-17T18:45:23.0740702Z The reason for the two tables printed out instead of one large one are that they handle different things: 2025-03-17T18:45:23.0741001Z 1.) the first table handles all tensor level information 2025-03-17T18:45:23.0741395Z 2.) the second table handles and displays all channel based information 2025-03-17T18:45:23.0741544Z 2025-03-17T18:45:23.0742140Z The reasoning for this is that having all the info in one table can make it ambiguous which collected 2025-03-17T18:45:23.0742759Z statistics are global, and which are actually per-channel, so it's better to split it up into two 2025-03-17T18:45:23.0743418Z tables. 
This also makes the information much easier to digest given the plethora of statistics collected 2025-03-17T18:45:23.0743574Z 2025-03-17T18:45:23.0743854Z Tensor table columns: 2025-03-17T18:45:23.0744206Z idx layer_fqn feature_1 feature_2 feature_3 .... feature_n 2025-03-17T18:45:23.0744465Z ---- --------- --------- --------- --------- --------- 2025-03-17T18:45:23.0744620Z 2025-03-17T18:45:23.0744808Z Per-Channel table columns: 2025-03-17T18:45:23.0744966Z 2025-03-17T18:45:23.0745365Z idx layer_fqn channel feature_1 feature_2 feature_3 .... feature_n 2025-03-17T18:45:23.0745656Z ---- --------- ------- --------- --------- --------- --------- 2025-03-17T18:45:23.0745799Z 2025-03-17T18:45:23.0745962Z Args: 2025-03-17T18:45:23.0746504Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:23.0746710Z contain this filter substring 2025-03-17T18:45:23.0747003Z Default = "", results in all the features being printed 2025-03-17T18:45:23.0747492Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:23.0747944Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:23.0748091Z 2025-03-17T18:45:23.0748251Z Example Use: 2025-03-17T18:45:23.0748487Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.0748781Z >>> mod_report_visualizer.generate_table_visualization( 2025-03-17T18:45:23.0749011Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:23.0749199Z ... module_fqn_filter = "block1" 2025-03-17T18:45:23.0749350Z ... ) 2025-03-17T18:45:23.0749753Z >>> # prints out neatly formatted table with per_channel_min info 2025-03-17T18:45:23.0749973Z >>> # for all modules in block 1 of the model 2025-03-17T18:45:23.0750127Z 2025-03-17T18:45:23.0750587Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0750740Z 2025-03-17T18:45:23.0750912Z warnings.warn(msg) 2025-03-17T18:45:23.0751074Z 2025-03-17T18:45:23.0751410Z --- Parse Warning: 34 / 116 --- 2025-03-17T18:45:23.0753833Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ModelReportVisualizer.generate_plot_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=566. 2025-03-17T18:45:23.0754367Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0754521Z 2025-03-17T18:45:23.0754968Z Takes in a feature and optional module_filter and plots of the desired data. 2025-03-17T18:45:23.0755123Z 2025-03-17T18:45:23.0755630Z For per channel features, it averages the value across the channels and plots a point 2025-03-17T18:45:23.0756118Z per module. The reason for this is that for models with hundreds of channels, it can 2025-03-17T18:45:23.0756633Z be hard to differentiate one channel line from another, and so the point of generating 2025-03-17T18:45:23.0757137Z a single average point per module is to give a sense of general trends that encourage 2025-03-17T18:45:23.0757311Z further deep dives. 
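Before the docstring's Note and Args continue below: the reduction it describes (collapse each module's per-channel values to their mean, then plot one point per module) is easy to picture with a tiny standalone sketch. The data and module names here are entirely made up for illustration, and this is not the ModelReportVisualizer implementation; it only assumes matplotlib is installed.

import matplotlib.pyplot as plt
import torch

# Hypothetical per-channel statistics keyed by module FQN (made-up numbers).
per_channel_min = {
    "block1.linear1": torch.tensor([-0.9, -1.2, -0.4]),
    "block1.linear2": torch.tensor([-0.3, -0.8, -0.6]),
    "block2.linear1": torch.tensor([-1.5, -0.2, -1.1]),
}

# One point per module: average the feature across that module's channels.
modules = list(per_channel_min)
averages = [values.mean().item() for values in per_channel_min.values()]

plt.plot(range(len(modules)), averages, marker="o")
plt.xticks(range(len(modules)), modules, rotation=45, ha="right")
plt.ylabel("mean per_channel_min")
plt.tight_layout()
plt.show()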
2025-03-17T18:45:23.0757469Z 2025-03-17T18:45:23.0757626Z Note: 2025-03-17T18:45:23.0758123Z Only features in the report that have tensor value data are plottable by this class 2025-03-17T18:45:23.0758418Z When the tensor information is plotted, it will plot: 2025-03-17T18:45:23.0758668Z idx as the x val, feature value as the y_val 2025-03-17T18:45:23.0758965Z When the channel information is plotted, it will plot: 2025-03-17T18:45:23.0759462Z the first idx of each module as the x val, feature value as the y_val [for each channel] 2025-03-17T18:45:23.0759882Z The reason for this is that we want to be able to compare values across the 2025-03-17T18:45:23.0760370Z channels for same layer, and it will be hard if values are staggered by idx 2025-03-17T18:45:23.0760675Z This means each module is represented by only 1 x value 2025-03-17T18:45:23.0760827Z Args: 2025-03-17T18:45:23.0761243Z feature_filter (str): Filters the features presented to only those that 2025-03-17T18:45:23.0761455Z contain this filter substring 2025-03-17T18:45:23.0761936Z module_fqn_filter (str, optional): Only includes modules that contains this string 2025-03-17T18:45:23.0762393Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:23.0762537Z 2025-03-17T18:45:23.0762713Z Example Use: 2025-03-17T18:45:23.0762935Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.0763227Z >>> mod_report_visualizer.generate_plot_visualization( 2025-03-17T18:45:23.0763444Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:23.0763643Z ... module_fqn_filter = "block1" 2025-03-17T18:45:23.0763798Z ... ) 2025-03-17T18:45:23.0764129Z >>> # outputs line plot of per_channel_min information for all 2025-03-17T18:45:23.0764460Z >>> # modules in block1 of model each channel gets it's own line, 2025-03-17T18:45:23.0764776Z >>> # and it's plotted across the in-order modules on the x-axis 2025-03-17T18:45:23.0764924Z 2025-03-17T18:45:23.0765398Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0765544Z 2025-03-17T18:45:23.0765727Z warnings.warn(msg) 2025-03-17T18:45:23.0765874Z 2025-03-17T18:45:23.0766230Z --- Parse Warning: 35 / 116 --- 2025-03-17T18:45:23.0768678Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ModelReportVisualizer.generate_histogram_visualization in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/ao/quantization/fx/_model_report/model_report_visualizer.py line=646. 2025-03-17T18:45:23.0769179Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0769328Z 2025-03-17T18:45:23.0769848Z Takes in a feature and optional module_filter and plots the histogram of desired data. 
2025-03-17T18:45:23.0769995Z 2025-03-17T18:45:23.0770183Z Note: 2025-03-17T18:45:23.0770680Z Only features in the report that have tensor value data can be viewed as a histogram 2025-03-17T18:45:23.0771170Z If you want to plot a histogram from all the channel values of a specific feature for 2025-03-17T18:45:23.0771623Z a specific model, make sure to specify both the model and the feature properly 2025-03-17T18:45:23.0772081Z in the filters and you should be able to see a distribution of the channel data 2025-03-17T18:45:23.0772224Z 2025-03-17T18:45:23.0772375Z Args: 2025-03-17T18:45:23.0772857Z feature_filter (str, optional): Filters the features presented to only those that 2025-03-17T18:45:23.0773060Z contain this filter substring 2025-03-17T18:45:23.0773346Z Default = "", results in all the features being printed 2025-03-17T18:45:23.0773823Z module_fqn_filter (str, optional): Only includes modules that contain this string 2025-03-17T18:45:23.0774271Z Default = "", results in all the modules in the reports to be visible in the table 2025-03-17T18:45:23.0774680Z num_bins (int, optional): The number of bins to create the histogram with 2025-03-17T18:45:23.0775010Z Default = 10, the values will be split into 10 equal sized bins 2025-03-17T18:45:23.0775166Z 2025-03-17T18:45:23.0775330Z Example Use: 2025-03-17T18:45:23.0775508Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0776065Z >>> mod_report_visualizer.generate_histogram_visualization( 2025-03-17T18:45:23.0776288Z ... feature_filter = "per_channel_min", 2025-03-17T18:45:23.0776479Z ... module_fqn_filter = "block1" 2025-03-17T18:45:23.0776702Z ... ) 2025-03-17T18:45:23.0777210Z # outputs histogram of per_channel_min information for all modules in block1 of model 2025-03-17T18:45:23.0777691Z information is gathered across all channels for all modules in block 1 for the 2025-03-17T18:45:23.0778094Z per_channel_min and is displayed in a histogram of equally sized bins 2025-03-17T18:45:23.0778249Z 2025-03-17T18:45:23.0778715Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0778869Z 2025-03-17T18:45:23.0779039Z warnings.warn(msg) 2025-03-17T18:45:23.0779201Z 2025-03-17T18:45:23.0779494Z --- Parse Warning: 36 / 116 --- 2025-03-17T18:45:23.0780522Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DeviceMesh.__getitem__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/device_mesh.py line=666. 2025-03-17T18:45:23.0780782Z Caused by: DoctestParseError('Failed to parse doctest in _package_groups') 2025-03-17T18:45:23.0780879Z 2025-03-17T18:45:23.0781155Z Slice the current DeviceMesh based on the mesh_dim_names given to create a submesh. 2025-03-17T18:45:23.0781439Z The submesh created consists of the dimensions and the communicators indicated by 2025-03-17T18:45:23.0781538Z ``mesh_dim_names`` 2025-03-17T18:45:23.0781638Z 2025-03-17T18:45:23.0781724Z Args: 2025-03-17T18:45:23.0781975Z mesh_dim_names (Union[str, Tuple[str]]): the name or the tuple of names of the 2025-03-17T18:45:23.0782203Z mesh dimension of the DeviceMesh to create the submesh for. 2025-03-17T18:45:23.0782302Z Returns: 2025-03-17T18:45:23.0782410Z A :class:`DeviceMesh` object 2025-03-17T18:45:23.0782493Z 2025-03-17T18:45:23.0782790Z The following program runs on each process/rank in an SPMD manner in a world size of 8.
2025-03-17T18:45:23.0782892Z In the first example: 2025-03-17T18:45:23.0783161Z Calling mesh_2d["tp"] on rank 0, 1, 2, 3 returns a 1D submesh of DeviceMesh:([0, 1, 2, 3]). 2025-03-17T18:45:23.0783417Z Calling mesh_2d["tp"] on rank 4, 5, 6, 7 returns a 1D submesh of DeviceMesh:([4, 5, 6, 7]). 2025-03-17T18:45:23.0783656Z Calling mesh_2d["dp"] on rank 0, 4 returns a 1D submesh of DeviceMesh:([0, 4]). 2025-03-17T18:45:23.0783913Z Calling mesh_2d["dp"] on rank 1, 5 returns a 1D submesh of DeviceMesh:([1, 5]). 2025-03-17T18:45:23.0784151Z Calling mesh_2d["dp"] on rank 2, 6 returns a 1D submesh of DeviceMesh:([2, 6]). 2025-03-17T18:45:23.0784391Z Calling mesh_2d["dp"] on rank 3, 7 returns a 1D submesh of DeviceMesh:([3, 7]). 2025-03-17T18:45:23.0784472Z 2025-03-17T18:45:23.0784575Z In the second example: 2025-03-17T18:45:23.0784855Z Calling mesh_3d["dp", "cp"] on rank 0, 1, 4, 5 returns a 2D submesh of DeviceMesh:([[0, 1], [4, 5]]). 2025-03-17T18:45:23.0785129Z Calling mesh_3d["dp", "cp"] on rank 2, 3, 6, 7 returns a 2D submesh of DeviceMesh:([[2, 3], [6, 7]]). 2025-03-17T18:45:23.0785406Z Calling mesh_3d["cp", "dp"] on rank 0, 1, 4, 5 returns a 2D submesh of DeviceMesh:([[0, 4], [1, 5]]). 2025-03-17T18:45:23.0785682Z Calling mesh_3d["cp", "dp"] on rank 2, 3, 6, 7 returns a 2D submesh of DeviceMesh:([[2, 6], [3, 7]]). 2025-03-17T18:45:23.0785766Z 2025-03-17T18:45:23.0785871Z Example:: 2025-03-17T18:45:23.0785983Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:23.0786163Z >>> from torch.distributed.device_mesh import DeviceMesh 2025-03-17T18:45:23.0786258Z >>> 2025-03-17T18:45:23.0786541Z >>> # Initialize a 2D device mesh as (2, 4) to represent the topology 2025-03-17T18:45:23.0786698Z >>> # of cross-host(dim 0), and within-host (dim 1). 2025-03-17T18:45:23.0786957Z >>> mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp")) 2025-03-17T18:45:23.0787074Z >>> tp_mesh = mesh_2d["tp"] 2025-03-17T18:45:23.0787230Z >>> dp_mesh = mesh_2d["dp"] 2025-03-17T18:45:23.0787329Z >>> 2025-03-17T18:45:23.0787432Z >>> # Initialize a 3D mesh. 2025-03-17T18:45:23.0787728Z >>> mesh_3d = init_device_mesh(device_type="cuda", (2,2,2), mesh_dim_names=("dp", "pp", "cp")) 2025-03-17T18:45:23.0788040Z >>> # The order of the mesh_dim_names provided deteremines the order of dimensions in the submesh. 2025-03-17T18:45:23.0788168Z >>> dp_cp_mesh = mesh_3d["dp", "cp"] 2025-03-17T18:45:23.0788282Z >>> cp_dp_mesh = mesh_3d["cp", "dp"] 2025-03-17T18:45:23.0788380Z 2025-03-17T18:45:23.0789059Z Original Error: SyntaxError('positional argument follows keyword argument', ('', 6, 82, 'mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp"))\n', 6, 83)) 2025-03-17T18:45:23.0789157Z 2025-03-17T18:45:23.0789407Z mesh_2d = init_device_mesh(device_type="cuda", (2,4), mesh_dim_names=("dp", "tp")) 2025-03-17T18:45:23.0789543Z ^ 2025-03-17T18:45:23.0789644Z warnings.warn(msg) 2025-03-17T18:45:23.0789740Z 2025-03-17T18:45:23.0789939Z --- Parse Warning: 37 / 116 --- 2025-03-17T18:45:23.0790931Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=batch_isend_irecv in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=2604. 2025-03-17T18:45:23.0791198Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0791326Z 2025-03-17T18:45:23.0791580Z Send or Receive a batch of tensors asynchronously and return a list of requests. 
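The SyntaxError recorded for Parse Warning 36 above is an ordinary Python error in the quoted doctest: the mesh shape is passed positionally after the keyword argument ``device_type``. A corrected sketch of those construction lines (it still assumes the 8-rank SPMD launch described in the docstring)::

    from torch.distributed.device_mesh import init_device_mesh

    # pass the mesh shape before any keyword arguments (or pass it by keyword as well)
    mesh_2d = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))
    tp_mesh = mesh_2d["tp"]
    dp_mesh = mesh_2d["dp"]

    mesh_3d = init_device_mesh("cuda", (2, 2, 2), mesh_dim_names=("dp", "pp", "cp"))
    dp_cp_mesh = mesh_3d["dp", "cp"]
    cp_dp_mesh = mesh_3d["cp", "dp"]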
2025-03-17T18:45:23.0791674Z 2025-03-17T18:45:23.0791923Z Process each of the operations in ``p2p_op_list`` and return the corresponding 2025-03-17T18:45:23.0792136Z requests. NCCL, Gloo, and UCC backend are currently supported. 2025-03-17T18:45:23.0792222Z 2025-03-17T18:45:23.0792322Z Args: 2025-03-17T18:45:23.0792550Z p2p_op_list: A list of point-to-point operations(type of each operator is 2025-03-17T18:45:23.0792798Z ``torch.distributed.P2POp``). The order of the isend/irecv in the list 2025-03-17T18:45:23.0793037Z matters and it needs to match with corresponding isend/irecv on the 2025-03-17T18:45:23.0793145Z remote end. 2025-03-17T18:45:23.0793228Z 2025-03-17T18:45:23.0793329Z Returns: 2025-03-17T18:45:23.0793575Z A list of distributed request objects returned by calling the corresponding 2025-03-17T18:45:23.0793692Z op in the op_list. 2025-03-17T18:45:23.0793773Z 2025-03-17T18:45:23.0793875Z Examples: 2025-03-17T18:45:23.0793986Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:23.0794194Z >>> send_tensor = torch.arange(2, dtype=torch.float32) + 2 * rank 2025-03-17T18:45:23.0794351Z >>> recv_tensor = torch.randn(2, dtype=torch.float32) 2025-03-17T18:45:23.0794577Z >>> send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size) 2025-03-17T18:45:23.0794683Z >>> recv_op = dist.P2POp( 2025-03-17T18:45:23.0794884Z ... dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size 2025-03-17T18:45:23.0794972Z ... ) 2025-03-17T18:45:23.0795122Z >>> reqs = batch_isend_irecv([send_op, recv_op]) 2025-03-17T18:45:23.0795222Z >>> for req in reqs: 2025-03-17T18:45:23.0795327Z >>> req.wait() 2025-03-17T18:45:23.0795421Z >>> recv_tensor 2025-03-17T18:45:23.0795532Z tensor([2, 3]) # Rank 0 2025-03-17T18:45:23.0795629Z tensor([0, 1]) # Rank 1 2025-03-17T18:45:23.0795722Z 2025-03-17T18:45:23.0795969Z .. note:: Note that when this API is used with the NCCL PG backend, users must set 2025-03-17T18:45:23.0796207Z the current GPU device with `torch.cuda.set_device`, otherwise it will 2025-03-17T18:45:23.0796369Z lead to unexpected hang issues. 2025-03-17T18:45:23.0796463Z 2025-03-17T18:45:23.0796678Z In addition, if this API is the first collective call in the ``group`` 2025-03-17T18:45:23.0796911Z passed to ``dist.P2POp``, all ranks of the ``group`` must participate in 2025-03-17T18:45:23.0797146Z this API call; otherwise, the behavior is undefined. If this API call is 2025-03-17T18:45:23.0797379Z not the first collective call in the ``group``, batched P2P operations 2025-03-17T18:45:23.0797579Z involving only a subset of ranks of the ``group`` are allowed. 2025-03-17T18:45:23.0797670Z 2025-03-17T18:45:23.0797925Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0798021Z 2025-03-17T18:45:23.0798123Z warnings.warn(msg) 2025-03-17T18:45:23.0798208Z 2025-03-17T18:45:23.0798411Z --- Parse Warning: 38 / 116 --- 2025-03-17T18:45:23.0799366Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=all_reduce in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=2734. 2025-03-17T18:45:23.0799648Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0799746Z 2025-03-17T18:45:23.0800022Z Reduces the tensor data across all machines in a way that all get the final result. 2025-03-17T18:45:23.0800109Z 2025-03-17T18:45:23.0800354Z After the call ``tensor`` is going to be bitwise identical in all processes. 
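Combining the batch_isend_irecv example above with its NCCL note (each rank must pin its GPU via torch.cuda.set_device before the batched p2p call), a self-contained sketch, assuming a torchrun launch that sets RANK, WORLD_SIZE and LOCAL_RANK::

    import os

    import torch
    import torch.distributed as dist

    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # required before NCCL p2p ops
    dist.init_process_group(backend="nccl")
    rank, world_size = dist.get_rank(), dist.get_world_size()

    send_tensor = torch.arange(2, dtype=torch.float32, device="cuda") + 2 * rank
    recv_tensor = torch.empty(2, dtype=torch.float32, device="cuda")
    ops = [
        dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size),
        dist.P2POp(dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size),
    ]
    for req in dist.batch_isend_irecv(ops):
        req.wait()
    # recv_tensor now holds the neighbouring rank's send_tensor, as in the docstring example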
2025-03-17T18:45:23.0800466Z 2025-03-17T18:45:23.0800593Z Complex tensors are supported. 2025-03-17T18:45:23.0800679Z 2025-03-17T18:45:23.0800782Z Args: 2025-03-17T18:45:23.0800993Z tensor (Tensor): Input and output of the collective. The function 2025-03-17T18:45:23.0801116Z operates in-place. 2025-03-17T18:45:23.0801239Z op (optional): One of the values from 2025-03-17T18:45:23.0801383Z ``torch.distributed.ReduceOp`` 2025-03-17T18:45:23.0801589Z enum. Specifies an operation used for element-wise reductions. 2025-03-17T18:45:23.0801841Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0802000Z the default process group will be used. 2025-03-17T18:45:23.0802212Z async_op (bool, optional): Whether this op should be an async op 2025-03-17T18:45:23.0802298Z 2025-03-17T18:45:23.0802401Z Returns: 2025-03-17T18:45:23.0802547Z Async work handle, if async_op is set to True. 2025-03-17T18:45:23.0802713Z None, if not async_op or if not part of the group 2025-03-17T18:45:23.0802797Z 2025-03-17T18:45:23.0802905Z Examples: 2025-03-17T18:45:23.0803018Z >>> # xdoctest: +SKIP("no rank") 2025-03-17T18:45:23.0803169Z >>> # All tensors below are of torch.int64 type. 2025-03-17T18:45:23.0803291Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:23.0803428Z >>> device = torch.device(f"cuda:{rank}") 2025-03-17T18:45:23.0803658Z >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank 2025-03-17T18:45:23.0803760Z >>> tensor 2025-03-17T18:45:23.0803880Z tensor([1, 2], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0804015Z tensor([3, 4], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0804149Z >>> dist.all_reduce(tensor, op=ReduceOp.SUM) 2025-03-17T18:45:23.0804249Z >>> tensor 2025-03-17T18:45:23.0804363Z tensor([4, 6], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0804492Z tensor([4, 6], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0804576Z 2025-03-17T18:45:23.0804731Z >>> # All tensors below are of torch.cfloat type. 2025-03-17T18:45:23.0804850Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:23.0804976Z >>> tensor = torch.tensor( 2025-03-17T18:45:23.0805171Z ... [1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device 2025-03-17T18:45:23.0805282Z ... ) + 2 * rank * (1 + 1j) 2025-03-17T18:45:23.0805369Z >>> tensor 2025-03-17T18:45:23.0805514Z tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0805661Z tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0805801Z >>> dist.all_reduce(tensor, op=ReduceOp.SUM) 2025-03-17T18:45:23.0805902Z >>> tensor 2025-03-17T18:45:23.0806037Z tensor([4.+4.j, 6.+6.j], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0806184Z tensor([4.+4.j, 6.+6.j], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0806272Z 2025-03-17T18:45:23.0806366Z 2025-03-17T18:45:23.0806629Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0806722Z 2025-03-17T18:45:23.0806825Z warnings.warn(msg) 2025-03-17T18:45:23.0806922Z 2025-03-17T18:45:23.0807111Z --- Parse Warning: 39 / 116 --- 2025-03-17T18:45:23.0808088Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=gather_object in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=3090. 
2025-03-17T18:45:23.0808355Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0808453Z 2025-03-17T18:45:23.0808681Z Gathers picklable objects from the whole group in a single process. 2025-03-17T18:45:23.0808778Z 2025-03-17T18:45:23.0809014Z Similar to :func:`gather`, but Python objects can be passed in. Note that the 2025-03-17T18:45:23.0809207Z object must be picklable in order to be gathered. 2025-03-17T18:45:23.0809290Z 2025-03-17T18:45:23.0809387Z Args: 2025-03-17T18:45:23.0809521Z obj (Any): Input object. Must be picklable. 2025-03-17T18:45:23.0809744Z object_gather_list (list[Any]): Output list. On the ``dst`` rank, it 2025-03-17T18:45:23.0809937Z should be correctly sized as the size of the group for this 2025-03-17T18:45:23.0810167Z collective and will contain the output. Must be ``None`` on non-dst 2025-03-17T18:45:23.0810279Z ranks. (default is ``None``) 2025-03-17T18:45:23.0810615Z dst (int, optional): Destination rank on global process group (regardless of ``group`` argument). 2025-03-17T18:45:23.0810839Z (If both ``dst`` and ``group_dst`` are None, default is global rank 0) 2025-03-17T18:45:23.0811084Z group: (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0811279Z the default process group will be used. Default is ``None``. 2025-03-17T18:45:23.0811642Z group_dst (int, optional): Destination rank on ``group``. Invalid to specify both ``dst`` and ``group_dst`` 2025-03-17T18:45:23.0811727Z 2025-03-17T18:45:23.0811826Z Returns: 2025-03-17T18:45:23.0812014Z None. On the ``dst`` rank, ``object_gather_list`` will contain the 2025-03-17T18:45:23.0812134Z output of the collective. 2025-03-17T18:45:23.0812219Z 2025-03-17T18:45:23.0812448Z .. note:: Note that this API differs slightly from the gather collective 2025-03-17T18:45:23.0812671Z since it does not provide an async_op handle and thus will be a blocking 2025-03-17T18:45:23.0812776Z call. 2025-03-17T18:45:23.0812863Z 2025-03-17T18:45:23.0813113Z .. note:: For NCCL-based processed groups, internal tensor representations 2025-03-17T18:45:23.0813333Z of objects must be moved to the GPU device before communication takes 2025-03-17T18:45:23.0813498Z place. In this case, the device used is given by 2025-03-17T18:45:23.0813721Z ``torch.cuda.current_device()`` and it is the user's responsiblity to 2025-03-17T18:45:23.0813945Z ensure that this is set so that each rank has an individual GPU, via 2025-03-17T18:45:23.0814055Z ``torch.cuda.set_device()``. 2025-03-17T18:45:23.0814152Z 2025-03-17T18:45:23.0814245Z .. warning:: 2025-03-17T18:45:23.0814548Z :func:`gather_object` uses ``pickle`` module implicitly, which is 2025-03-17T18:45:23.0814778Z known to be insecure. It is possible to construct malicious pickle data 2025-03-17T18:45:23.0815019Z which will execute arbitrary code during unpickling. Only call this 2025-03-17T18:45:23.0815137Z function with data you trust. 2025-03-17T18:45:23.0815234Z 2025-03-17T18:45:23.0815327Z .. warning:: 2025-03-17T18:45:23.0815558Z Calling :func:`gather_object` with GPU tensors is not well supported 2025-03-17T18:45:23.0815788Z and inefficient as it incurs GPU -> CPU transfer since tensors would be 2025-03-17T18:45:23.0815978Z pickled. Please consider using :func:`gather` instead. 2025-03-17T18:45:23.0816064Z 2025-03-17T18:45:23.0816171Z Example:: 2025-03-17T18:45:23.0816308Z >>> # xdoctest: +SKIP("need process group init") 2025-03-17T18:45:23.0816507Z >>> # Note: Process group initialization omitted on each rank. 
2025-03-17T18:45:23.0816629Z >>> import torch.distributed as dist 2025-03-17T18:45:23.0816739Z >>> # Assumes world_size of 3. 2025-03-17T18:45:23.0816929Z >>> gather_objects = ["foo", 12, {1: 2}] # any picklable object 2025-03-17T18:45:23.0817052Z >>> output = [None for _ in gather_objects] 2025-03-17T18:45:23.0817170Z >>> dist.gather_object( 2025-03-17T18:45:23.0817293Z ... gather_objects[dist.get_rank()], 2025-03-17T18:45:23.0817438Z ... output if dist.get_rank() == 0 else None, 2025-03-17T18:45:23.0817531Z ... dst=0 2025-03-17T18:45:23.0817649Z ... ) 2025-03-17T18:45:23.0817745Z >>> # On rank 0 2025-03-17T18:45:23.0817841Z >>> output 2025-03-17T18:45:23.0817934Z ['foo', 12, {1: 2}] 2025-03-17T18:45:23.0818028Z 2025-03-17T18:45:23.0818286Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0818382Z 2025-03-17T18:45:23.0818487Z warnings.warn(msg) 2025-03-17T18:45:23.0818581Z 2025-03-17T18:45:23.0818776Z --- Parse Warning: 40 / 116 --- 2025-03-17T18:45:23.0819737Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=all_gather in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=3666. 2025-03-17T18:45:23.0820032Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0820131Z 2025-03-17T18:45:23.0820276Z Gathers tensors from the whole group in a list. 2025-03-17T18:45:23.0820376Z 2025-03-17T18:45:23.0820524Z Complex and uneven sized tensors are supported. 2025-03-17T18:45:23.0820619Z 2025-03-17T18:45:23.0820704Z Args: 2025-03-17T18:45:23.0820897Z tensor_list (list[Tensor]): Output list. It should contain 2025-03-17T18:45:23.0821120Z correctly-sized tensors to be used for output of the collective. 2025-03-17T18:45:23.0821261Z Uneven sized tensors are supported. 2025-03-17T18:45:23.0821456Z tensor (Tensor): Tensor to be broadcast from current process. 2025-03-17T18:45:23.0821703Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0821833Z the default process group will be used. 2025-03-17T18:45:23.0822044Z async_op (bool, optional): Whether this op should be an async op 2025-03-17T18:45:23.0822130Z 2025-03-17T18:45:23.0822235Z Returns: 2025-03-17T18:45:23.0822380Z Async work handle, if async_op is set to True. 2025-03-17T18:45:23.0822544Z None, if not async_op or if not part of the group 2025-03-17T18:45:23.0822629Z 2025-03-17T18:45:23.0822731Z Examples: 2025-03-17T18:45:23.0822871Z >>> # xdoctest: +SKIP("need process group init") 2025-03-17T18:45:23.0823027Z >>> # All tensors below are of torch.int64 dtype. 2025-03-17T18:45:23.0823195Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:23.0823334Z >>> device = torch.device(f"cuda:{rank}") 2025-03-17T18:45:23.0823434Z >>> tensor_list = [ 2025-03-17T18:45:23.0823648Z ... torch.zeros(2, dtype=torch.int64, device=device) for _ in range(2) 2025-03-17T18:45:23.0823749Z ... 
] 2025-03-17T18:45:23.0823846Z >>> tensor_list 2025-03-17T18:45:23.0824058Z [tensor([0, 0], device='cuda:0'), tensor([0, 0], device='cuda:0')] # Rank 0 2025-03-17T18:45:23.0824259Z [tensor([0, 0], device='cuda:1'), tensor([0, 0], device='cuda:1')] # Rank 1 2025-03-17T18:45:23.0824501Z >>> tensor = torch.arange(2, dtype=torch.int64, device=device) + 1 + 2 * rank 2025-03-17T18:45:23.0824592Z >>> tensor 2025-03-17T18:45:23.0824721Z tensor([1, 2], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0824841Z tensor([3, 4], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0824969Z >>> dist.all_gather(tensor_list, tensor) 2025-03-17T18:45:23.0825068Z >>> tensor_list 2025-03-17T18:45:23.0825280Z [tensor([1, 2], device='cuda:0'), tensor([3, 4], device='cuda:0')] # Rank 0 2025-03-17T18:45:23.0825478Z [tensor([1, 2], device='cuda:1'), tensor([3, 4], device='cuda:1')] # Rank 1 2025-03-17T18:45:23.0825570Z 2025-03-17T18:45:23.0825719Z >>> # All tensors below are of torch.cfloat dtype. 2025-03-17T18:45:23.0825852Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:23.0825954Z >>> tensor_list = [ 2025-03-17T18:45:23.0826179Z ... torch.zeros(2, dtype=torch.cfloat, device=device) for _ in range(2) 2025-03-17T18:45:23.0826292Z ... ] 2025-03-17T18:45:23.0826397Z >>> tensor_list 2025-03-17T18:45:23.0826725Z [tensor([0.+0.j, 0.+0.j], device='cuda:0'), tensor([0.+0.j, 0.+0.j], device='cuda:0')] # Rank 0 2025-03-17T18:45:23.0826993Z [tensor([0.+0.j, 0.+0.j], device='cuda:1'), tensor([0.+0.j, 0.+0.j], device='cuda:1')] # Rank 1 2025-03-17T18:45:23.0827110Z >>> tensor = torch.tensor( 2025-03-17T18:45:23.0827276Z ... [1 + 1j, 2 + 2j], dtype=torch.cfloat, device=device 2025-03-17T18:45:23.0827377Z ... ) + 2 * rank * (1 + 1j) 2025-03-17T18:45:23.0827479Z >>> tensor 2025-03-17T18:45:23.0827624Z tensor([1.+1.j, 2.+2.j], device='cuda:0') # Rank 0 2025-03-17T18:45:23.0827808Z tensor([3.+3.j, 4.+4.j], device='cuda:1') # Rank 1 2025-03-17T18:45:23.0827933Z >>> dist.all_gather(tensor_list, tensor) 2025-03-17T18:45:23.0828039Z >>> tensor_list 2025-03-17T18:45:23.0828285Z [tensor([1.+1.j, 2.+2.j], device='cuda:0'), tensor([3.+3.j, 4.+4.j], device='cuda:0')] # Rank 0 2025-03-17T18:45:23.0828543Z [tensor([1.+1.j, 2.+2.j], device='cuda:1'), tensor([3.+3.j, 4.+4.j], device='cuda:1')] # Rank 1 2025-03-17T18:45:23.0828631Z 2025-03-17T18:45:23.0828726Z 2025-03-17T18:45:23.0828987Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0829082Z 2025-03-17T18:45:23.0829187Z warnings.warn(msg) 2025-03-17T18:45:23.0829285Z 2025-03-17T18:45:23.0829489Z --- Parse Warning: 41 / 116 --- 2025-03-17T18:45:23.0830474Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=all_to_all_single in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=4381. 2025-03-17T18:45:23.0830745Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0830842Z 2025-03-17T18:45:23.0831101Z Split input tensor and then scatter the split list to all processes in a group. 2025-03-17T18:45:23.0831201Z 2025-03-17T18:45:23.0831466Z Later the received tensors are concatenated from all the processes in the group 2025-03-17T18:45:23.0831602Z and returned as a single output tensor. 2025-03-17T18:45:23.0831688Z 2025-03-17T18:45:23.0831815Z Complex tensors are supported. 
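The all_gather examples above assume CUDA tensors; the same collective can be smoke-tested on CPU with the gloo backend. A minimal sketch, assuming two ranks launched via torchrun::

    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="gloo")  # CPU-friendly backend
    rank, world_size = dist.get_rank(), dist.get_world_size()

    tensor_list = [torch.zeros(2, dtype=torch.int64) for _ in range(world_size)]
    tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
    dist.all_gather(tensor_list, tensor)
    # with 2 ranks, every rank now holds [tensor([1, 2]), tensor([3, 4])],
    # mirroring the int64 example in the docstring above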
2025-03-17T18:45:23.0831902Z 2025-03-17T18:45:23.0832050Z Args: 2025-03-17T18:45:23.0832223Z output (Tensor): Gathered concatenated output tensor. 2025-03-17T18:45:23.0832365Z input (Tensor): Input tensor to scatter. 2025-03-17T18:45:23.0832591Z output_split_sizes: (list[Int], optional): Output split sizes for dim 0 2025-03-17T18:45:23.0832813Z if specified None or empty, dim 0 of ``output`` tensor must divide 2025-03-17T18:45:23.0832926Z equally by ``world_size``. 2025-03-17T18:45:23.0833154Z input_split_sizes: (list[Int], optional): Input split sizes for dim 0 2025-03-17T18:45:23.0833362Z if specified None or empty, dim 0 of ``input`` tensor must divide 2025-03-17T18:45:23.0833480Z equally by ``world_size``. 2025-03-17T18:45:23.0833716Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0833855Z the default process group will be used. 2025-03-17T18:45:23.0834067Z async_op (bool, optional): Whether this op should be an async op. 2025-03-17T18:45:23.0834163Z 2025-03-17T18:45:23.0834251Z Returns: 2025-03-17T18:45:23.0834393Z Async work handle, if async_op is set to True. 2025-03-17T18:45:23.0834552Z None, if not async_op or if not part of the group. 2025-03-17T18:45:23.0834641Z 2025-03-17T18:45:23.0834740Z .. warning:: 2025-03-17T18:45:23.0834917Z `all_to_all_single` is experimental and subject to change. 2025-03-17T18:45:23.0835014Z 2025-03-17T18:45:23.0835106Z Examples: 2025-03-17T18:45:23.0835240Z >>> # xdoctest: +SKIP("Undefined rank") 2025-03-17T18:45:23.0835391Z >>> input = torch.arange(4) + rank * 4 2025-03-17T18:45:23.0835491Z >>> input 2025-03-17T18:45:23.0835596Z tensor([0, 1, 2, 3]) # Rank 0 2025-03-17T18:45:23.0835711Z tensor([4, 5, 6, 7]) # Rank 1 2025-03-17T18:45:23.0835816Z tensor([8, 9, 10, 11]) # Rank 2 2025-03-17T18:45:23.0835929Z tensor([12, 13, 14, 15]) # Rank 3 2025-03-17T18:45:23.0836074Z >>> output = torch.empty([4], dtype=torch.int64) 2025-03-17T18:45:23.0836211Z >>> dist.all_to_all_single(output, input) 2025-03-17T18:45:23.0836299Z >>> output 2025-03-17T18:45:23.0836413Z tensor([0, 4, 8, 12]) # Rank 0 2025-03-17T18:45:23.0836547Z tensor([1, 5, 9, 13]) # Rank 1 2025-03-17T18:45:23.0836661Z tensor([2, 6, 10, 14]) # Rank 2 2025-03-17T18:45:23.0836925Z tensor([3, 7, 11, 15]) # Rank 3 2025-03-17T18:45:23.0837022Z 2025-03-17T18:45:23.0837190Z >>> # Essentially, it is similar to following operation: 2025-03-17T18:45:23.0837348Z >>> scatter_list = list(input.chunk(world_size)) 2025-03-17T18:45:23.0837490Z >>> gather_list = list(output.chunk(world_size)) 2025-03-17T18:45:23.0837612Z >>> for i in range(world_size): 2025-03-17T18:45:23.0837847Z >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src = i) 2025-03-17T18:45:23.0837940Z 2025-03-17T18:45:23.0838070Z >>> # Another example with uneven split 2025-03-17T18:45:23.0838163Z >>> input 2025-03-17T18:45:23.0838323Z tensor([0, 1, 2, 3, 4, 5]) # Rank 0 2025-03-17T18:45:23.0838493Z tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 2025-03-17T18:45:23.0838653Z tensor([20, 21, 22, 23, 24]) # Rank 2 2025-03-17T18:45:23.0838822Z tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 2025-03-17T18:45:23.0838921Z >>> input_splits 2025-03-17T18:45:23.0839058Z [2, 2, 1, 1] # Rank 0 2025-03-17T18:45:23.0839180Z [3, 2, 2, 2] # Rank 1 2025-03-17T18:45:23.0839310Z [2, 1, 1, 1] # Rank 2 2025-03-17T18:45:23.0839515Z [2, 2, 2, 1] # Rank 3 2025-03-17T18:45:23.0839631Z >>> output_splits 2025-03-17T18:45:23.0839754Z [2, 3, 2, 2] # Rank 0 2025-03-17T18:45:23.0839888Z [2, 2, 1, 2] # Rank 1 
2025-03-17T18:45:23.0840013Z [1, 2, 1, 2] # Rank 2 2025-03-17T18:45:23.0840150Z [1, 2, 1, 1] # Rank 3 2025-03-17T18:45:23.0840249Z >>> output = ... 2025-03-17T18:45:23.0840478Z >>> dist.all_to_all_single(output, input, output_splits, input_splits) 2025-03-17T18:45:23.0840569Z >>> output 2025-03-17T18:45:23.0840747Z tensor([ 0, 1, 10, 11, 12, 20, 21, 30, 31]) # Rank 0 2025-03-17T18:45:23.0840907Z tensor([ 2, 3, 13, 14, 22, 32, 33]) # Rank 1 2025-03-17T18:45:23.0841083Z tensor([ 4, 15, 16, 23, 34, 35]) # Rank 2 2025-03-17T18:45:23.0841240Z tensor([ 5, 17, 18, 24, 36]) # Rank 3 2025-03-17T18:45:23.0841342Z 2025-03-17T18:45:23.0841427Z 2025-03-17T18:45:23.0841603Z >>> # Another example with tensors of torch.cfloat type. 2025-03-17T18:45:23.0841712Z >>> input = torch.tensor( 2025-03-17T18:45:23.0841853Z ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat 2025-03-17T18:45:23.0841962Z ... ) + 4 * rank * (1 + 1j) 2025-03-17T18:45:23.0842052Z >>> input 2025-03-17T18:45:23.0842269Z tensor([1+1j, 2+2j, 3+3j, 4+4j]) # Rank 0 2025-03-17T18:45:23.0842444Z tensor([5+5j, 6+6j, 7+7j, 8+8j]) # Rank 1 2025-03-17T18:45:23.0842635Z tensor([9+9j, 10+10j, 11+11j, 12+12j]) # Rank 2 2025-03-17T18:45:23.0842824Z tensor([13+13j, 14+14j, 15+15j, 16+16j]) # Rank 3 2025-03-17T18:45:23.0842976Z >>> output = torch.empty([4], dtype=torch.int64) 2025-03-17T18:45:23.0843104Z >>> dist.all_to_all_single(output, input) 2025-03-17T18:45:23.0843199Z >>> output 2025-03-17T18:45:23.0843373Z tensor([1+1j, 5+5j, 9+9j, 13+13j]) # Rank 0 2025-03-17T18:45:23.0843609Z tensor([2+2j, 6+6j, 10+10j, 14+14j]) # Rank 1 2025-03-17T18:45:23.0843786Z tensor([3+3j, 7+7j, 11+11j, 15+15j]) # Rank 2 2025-03-17T18:45:23.0843969Z tensor([4+4j, 8+8j, 12+12j, 16+16j]) # Rank 3 2025-03-17T18:45:23.0844056Z 2025-03-17T18:45:23.0844325Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0844411Z 2025-03-17T18:45:23.0844523Z warnings.warn(msg) 2025-03-17T18:45:23.0844610Z 2025-03-17T18:45:23.0844830Z --- Parse Warning: 42 / 116 --- 2025-03-17T18:45:23.0845780Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=all_to_all in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/distributed_c10d.py line=4523. 2025-03-17T18:45:23.0846057Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0846143Z 2025-03-17T18:45:23.0846528Z Scatters list of input tensors to all processes in a group and return gathered list of tensors in output list. 2025-03-17T18:45:23.0846616Z 2025-03-17T18:45:23.0846745Z Complex tensors are supported. 2025-03-17T18:45:23.0846831Z 2025-03-17T18:45:23.0846926Z Args: 2025-03-17T18:45:23.0847155Z output_tensor_list (list[Tensor]): List of tensors to be gathered one 2025-03-17T18:45:23.0847260Z per rank. 2025-03-17T18:45:23.0847494Z input_tensor_list (list[Tensor]): List of tensors to scatter one per rank. 2025-03-17T18:45:23.0847789Z group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0847924Z the default process group will be used. 2025-03-17T18:45:23.0848136Z async_op (bool, optional): Whether this op should be an async op. 2025-03-17T18:45:23.0848224Z 2025-03-17T18:45:23.0848323Z Returns: 2025-03-17T18:45:23.0848465Z Async work handle, if async_op is set to True. 2025-03-17T18:45:23.0848625Z None, if not async_op or if not part of the group. 2025-03-17T18:45:23.0848706Z 2025-03-17T18:45:23.0848810Z .. 
warning:: 2025-03-17T18:45:23.0848965Z `all_to_all` is experimental and subject to change. 2025-03-17T18:45:23.0849058Z 2025-03-17T18:45:23.0849148Z Examples: 2025-03-17T18:45:23.0849283Z >>> # xdoctest: +SKIP("Undefined rank") 2025-03-17T18:45:23.0849402Z >>> input = torch.arange(4) + rank * 4 2025-03-17T18:45:23.0849524Z >>> input = list(input.chunk(4)) 2025-03-17T18:45:23.0849613Z >>> input 2025-03-17T18:45:23.0849800Z [tensor([0]), tensor([1]), tensor([2]), tensor([3])] # Rank 0 2025-03-17T18:45:23.0849969Z [tensor([4]), tensor([5]), tensor([6]), tensor([7])] # Rank 1 2025-03-17T18:45:23.0850148Z [tensor([8]), tensor([9]), tensor([10]), tensor([11])] # Rank 2 2025-03-17T18:45:23.0850319Z [tensor([12]), tensor([13]), tensor([14]), tensor([15])] # Rank 3 2025-03-17T18:45:23.0850517Z >>> output = list(torch.empty([4], dtype=torch.int64).chunk(4)) 2025-03-17T18:45:23.0850635Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:23.0850761Z >>> output 2025-03-17T18:45:23.0850929Z [tensor([0]), tensor([4]), tensor([8]), tensor([12])] # Rank 0 2025-03-17T18:45:23.0851104Z [tensor([1]), tensor([5]), tensor([9]), tensor([13])] # Rank 1 2025-03-17T18:45:23.0851272Z [tensor([2]), tensor([6]), tensor([10]), tensor([14])] # Rank 2 2025-03-17T18:45:23.0851453Z [tensor([3]), tensor([7]), tensor([11]), tensor([15])] # Rank 3 2025-03-17T18:45:23.0851537Z 2025-03-17T18:45:23.0851714Z >>> # Essentially, it is similar to following operation: 2025-03-17T18:45:23.0851817Z >>> scatter_list = input 2025-03-17T18:45:23.0851921Z >>> gather_list = output 2025-03-17T18:45:23.0852073Z >>> for i in range(world_size): 2025-03-17T18:45:23.0852303Z >>> dist.scatter(gather_list[i], scatter_list if i == rank else [], src=i) 2025-03-17T18:45:23.0852400Z 2025-03-17T18:45:23.0852488Z >>> input 2025-03-17T18:45:23.0852657Z tensor([0, 1, 2, 3, 4, 5]) # Rank 0 2025-03-17T18:45:23.0852825Z tensor([10, 11, 12, 13, 14, 15, 16, 17, 18]) # Rank 1 2025-03-17T18:45:23.0852991Z tensor([20, 21, 22, 23, 24]) # Rank 2 2025-03-17T18:45:23.0853150Z tensor([30, 31, 32, 33, 34, 35, 36]) # Rank 3 2025-03-17T18:45:23.0853262Z >>> input_splits 2025-03-17T18:45:23.0853386Z [2, 2, 1, 1] # Rank 0 2025-03-17T18:45:23.0853530Z [3, 2, 2, 2] # Rank 1 2025-03-17T18:45:23.0853652Z [2, 1, 1, 1] # Rank 2 2025-03-17T18:45:23.0853790Z [2, 2, 2, 1] # Rank 3 2025-03-17T18:45:23.0853890Z >>> output_splits 2025-03-17T18:45:23.0854027Z [2, 3, 2, 2] # Rank 0 2025-03-17T18:45:23.0854153Z [2, 2, 1, 2] # Rank 1 2025-03-17T18:45:23.0854291Z [1, 2, 1, 2] # Rank 2 2025-03-17T18:45:23.0854418Z [1, 2, 1, 1] # Rank 3 2025-03-17T18:45:23.0854613Z >>> input = list(input.split(input_splits)) 2025-03-17T18:45:23.0854708Z >>> input 2025-03-17T18:45:23.0854934Z [tensor([0, 1]), tensor([2, 3]), tensor([4]), tensor([5])] # Rank 0 2025-03-17T18:45:23.0855144Z [tensor([10, 11, 12]), tensor([13, 14]), tensor([15, 16]), tensor([17, 18])] # Rank 1 2025-03-17T18:45:23.0855368Z [tensor([20, 21]), tensor([22]), tensor([23]), tensor([24])] # Rank 2 2025-03-17T18:45:23.0855579Z [tensor([30, 31]), tensor([32, 33]), tensor([34, 35]), tensor([36])] # Rank 3 2025-03-17T18:45:23.0855694Z >>> output = ... 
2025-03-17T18:45:23.0855817Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:23.0855922Z >>> output 2025-03-17T18:45:23.0856129Z [tensor([0, 1]), tensor([10, 11, 12]), tensor([20, 21]), tensor([30, 31])] # Rank 0 2025-03-17T18:45:23.0856343Z [tensor([2, 3]), tensor([13, 14]), tensor([22]), tensor([32, 33])] # Rank 1 2025-03-17T18:45:23.0856556Z [tensor([4]), tensor([15, 16]), tensor([23]), tensor([34, 35])] # Rank 2 2025-03-17T18:45:23.0856776Z [tensor([5]), tensor([17, 18]), tensor([24]), tensor([36])] # Rank 3 2025-03-17T18:45:23.0856864Z 2025-03-17T18:45:23.0857044Z >>> # Another example with tensors of torch.cfloat type. 2025-03-17T18:45:23.0857158Z >>> input = torch.tensor( 2025-03-17T18:45:23.0857318Z ... [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j], dtype=torch.cfloat 2025-03-17T18:45:23.0857420Z ... ) + 4 * rank * (1 + 1j) 2025-03-17T18:45:23.0857547Z >>> input = list(input.chunk(4)) 2025-03-17T18:45:23.0857664Z >>> input 2025-03-17T18:45:23.0857893Z [tensor([1+1j]), tensor([2+2j]), tensor([3+3j]), tensor([4+4j])] # Rank 0 2025-03-17T18:45:23.0858112Z [tensor([5+5j]), tensor([6+6j]), tensor([7+7j]), tensor([8+8j])] # Rank 1 2025-03-17T18:45:23.0858351Z [tensor([9+9j]), tensor([10+10j]), tensor([11+11j]), tensor([12+12j])] # Rank 2 2025-03-17T18:45:23.0858576Z [tensor([13+13j]), tensor([14+14j]), tensor([15+15j]), tensor([16+16j])] # Rank 3 2025-03-17T18:45:23.0858773Z >>> output = list(torch.empty([4], dtype=torch.int64).chunk(4)) 2025-03-17T18:45:23.0858898Z >>> dist.all_to_all(output, input) 2025-03-17T18:45:23.0859031Z >>> output 2025-03-17T18:45:23.0859248Z [tensor([1+1j]), tensor([5+5j]), tensor([9+9j]), tensor([13+13j])] # Rank 0 2025-03-17T18:45:23.0859475Z [tensor([2+2j]), tensor([6+6j]), tensor([10+10j]), tensor([14+14j])] # Rank 1 2025-03-17T18:45:23.0859693Z [tensor([3+3j]), tensor([7+7j]), tensor([11+11j]), tensor([15+15j])] # Rank 2 2025-03-17T18:45:23.0859920Z [tensor([4+4j]), tensor([8+8j]), tensor([12+12j]), tensor([16+16j])] # Rank 3 2025-03-17T18:45:23.0860006Z 2025-03-17T18:45:23.0860098Z 2025-03-17T18:45:23.0860360Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0860457Z 2025-03-17T18:45:23.0860563Z warnings.warn(msg) 2025-03-17T18:45:23.0860659Z 2025-03-17T18:45:23.0860864Z --- Parse Warning: 43 / 116 --- 2025-03-17T18:45:23.0861739Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=__doc__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/launch.py line=2. 2025-03-17T18:45:23.0862012Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0862108Z 2025-03-17T18:45:23.0862236Z Module ``torch.distributed.launch``. 2025-03-17T18:45:23.0862334Z 2025-03-17T18:45:23.0862591Z ``torch.distributed.launch`` is a module that spawns up multiple distributed 2025-03-17T18:45:23.0862761Z training processes on each of the training nodes. 2025-03-17T18:45:23.0862847Z 2025-03-17T18:45:23.0862951Z .. warning:: 2025-03-17T18:45:23.0863040Z 2025-03-17T18:45:23.0863362Z This module is going to be deprecated in favor of :ref:`torchrun `. 2025-03-17T18:45:23.0863450Z 2025-03-17T18:45:23.0863707Z The utility can be used for single-node distributed training, in which one or 2025-03-17T18:45:23.0863953Z more processes per node will be spawned. The utility can be used for either 2025-03-17T18:45:23.0864189Z CPU training or GPU training. 
If the utility is used for GPU training, 2025-03-17T18:45:23.0864441Z each distributed process will be operating on a single GPU. This can achieve 2025-03-17T18:45:23.0864698Z well-improved single-node training performance. It can also be used in 2025-03-17T18:45:23.0864975Z multi-node distributed training, by spawning up multiple processes on each node 2025-03-17T18:45:23.0865225Z for well-improved multi-node distributed training performance as well. 2025-03-17T18:45:23.0865464Z This will especially be beneficial for systems with multiple Infiniband 2025-03-17T18:45:23.0865743Z interfaces that have direct-GPU support, since all of them can be utilized for 2025-03-17T18:45:23.0865867Z aggregated communication bandwidth. 2025-03-17T18:45:23.0865966Z 2025-03-17T18:45:23.0866206Z In both cases of single-node distributed training or multi-node distributed 2025-03-17T18:45:23.0866554Z training, this utility will launch the given number of processes per node 2025-03-17T18:45:23.0866791Z (``--nproc-per-node``). If used for GPU training, this number needs to be less 2025-03-17T18:45:23.0867026Z or equal to the number of GPUs on the current system (``nproc_per_node``), 2025-03-17T18:45:23.0867266Z and each process will be operating on a single GPU from *GPU 0 to 2025-03-17T18:45:23.0867389Z GPU (nproc_per_node - 1)*. 2025-03-17T18:45:23.0867475Z 2025-03-17T18:45:23.0867591Z **How to use this module:** 2025-03-17T18:45:23.0867677Z 2025-03-17T18:45:23.0867845Z 1. Single-Node multi-process distributed training 2025-03-17T18:45:23.0867931Z 2025-03-17T18:45:23.0868040Z :: 2025-03-17T18:45:23.0868125Z 2025-03-17T18:45:23.0868379Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:23.0868577Z YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other 2025-03-17T18:45:23.0868746Z arguments of your training script) 2025-03-17T18:45:23.0868833Z 2025-03-17T18:45:23.0869050Z 2. Multi-Node multi-process distributed training: (e.g. two nodes) 2025-03-17T18:45:23.0869146Z 2025-03-17T18:45:23.0869231Z 2025-03-17T18:45:23.0869392Z Node 1: *(IP: 192.168.1.1, and has a free port: 1234)* 2025-03-17T18:45:23.0869480Z 2025-03-17T18:45:23.0869580Z :: 2025-03-17T18:45:23.0869666Z 2025-03-17T18:45:23.0869919Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:23.0870084Z --nnodes=2 --node-rank=0 --master-addr="192.168.1.1" 2025-03-17T18:45:23.0870312Z --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 2025-03-17T18:45:23.0870472Z and all other arguments of your training script) 2025-03-17T18:45:23.0870567Z 2025-03-17T18:45:23.0870656Z Node 2: 2025-03-17T18:45:23.0870753Z 2025-03-17T18:45:23.0870843Z :: 2025-03-17T18:45:23.0870939Z 2025-03-17T18:45:23.0871175Z python -m torch.distributed.launch --nproc-per-node=NUM_GPUS_YOU_HAVE 2025-03-17T18:45:23.0871346Z --nnodes=2 --node-rank=1 --master-addr="192.168.1.1" 2025-03-17T18:45:23.0871559Z --master-port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 2025-03-17T18:45:23.0871725Z and all other arguments of your training script) 2025-03-17T18:45:23.0871812Z 2025-03-17T18:45:23.0871992Z 3. To look up what optional arguments this module offers: 2025-03-17T18:45:23.0872077Z 2025-03-17T18:45:23.0872177Z :: 2025-03-17T18:45:23.0872263Z 2025-03-17T18:45:23.0872460Z python -m torch.distributed.launch --help 2025-03-17T18:45:23.0872547Z 2025-03-17T18:45:23.0872644Z 2025-03-17T18:45:23.0872751Z **Important Notices:** 2025-03-17T18:45:23.0872835Z 2025-03-17T18:45:23.0873043Z 1. 
This utility and multi-process distributed (single-node or 2025-03-17T18:45:23.0873304Z multi-node) GPU training currently only achieves the best performance using 2025-03-17T18:45:23.0873574Z the NCCL distributed backend. Thus NCCL backend is the recommended backend to 2025-03-17T18:45:23.0873678Z use for GPU training. 2025-03-17T18:45:23.0873774Z 2025-03-17T18:45:23.0873997Z 2. In your training program, you must parse the command-line argument: 2025-03-17T18:45:23.0874250Z ``--local-rank=LOCAL_PROCESS_RANK``, which will be provided by this module. 2025-03-17T18:45:23.0874486Z If your training program uses GPUs, you should ensure that your code only 2025-03-17T18:45:23.0874704Z runs on the GPU device of LOCAL_PROCESS_RANK. This can be done by: 2025-03-17T18:45:23.0874791Z 2025-03-17T18:45:23.0874916Z Parsing the local_rank argument 2025-03-17T18:45:23.0875001Z 2025-03-17T18:45:23.0875099Z :: 2025-03-17T18:45:23.0875184Z 2025-03-17T18:45:23.0875299Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0875401Z >>> import argparse 2025-03-17T18:45:23.0875548Z >>> parser = argparse.ArgumentParser() 2025-03-17T18:45:23.0875748Z >>> parser.add_argument("--local-rank", "--local_rank", type=int) 2025-03-17T18:45:23.0875874Z >>> args = parser.parse_args() 2025-03-17T18:45:23.0875958Z 2025-03-17T18:45:23.0876146Z Set your device to local rank using either 2025-03-17T18:45:23.0876234Z 2025-03-17T18:45:23.0876329Z :: 2025-03-17T18:45:23.0876414Z 2025-03-17T18:45:23.0876635Z >>> torch.cuda.set_device(args.local_rank) # before your code runs 2025-03-17T18:45:23.0876722Z 2025-03-17T18:45:23.0876819Z or 2025-03-17T18:45:23.0876906Z 2025-03-17T18:45:23.0877006Z :: 2025-03-17T18:45:23.0877095Z 2025-03-17T18:45:23.0877236Z >>> with torch.cuda.device(args.local_rank): 2025-03-17T18:45:23.0877354Z >>> # your code to run 2025-03-17T18:45:23.0877444Z >>> ... 2025-03-17T18:45:23.0877543Z 2025-03-17T18:45:23.0877684Z .. versionchanged:: 2.0.0 2025-03-17T18:45:23.0877782Z 2025-03-17T18:45:23.0878039Z The launcher will passes the ``--local-rank=`` argument to your script. 2025-03-17T18:45:23.0878294Z From PyTorch 2.0.0 onwards, the dashed ``--local-rank`` is preferred over the 2025-03-17T18:45:23.0878444Z previously used underscored ``--local_rank``. 2025-03-17T18:45:23.0878540Z 2025-03-17T18:45:23.0878783Z For backward compatibility, it may be necessary for users to handle both 2025-03-17T18:45:23.0879068Z cases in their argument parsing code. This means including both ``"--local-rank"`` 2025-03-17T18:45:23.0879293Z and ``"--local_rank"`` in the argument parser. If only ``"--local_rank"`` is 2025-03-17T18:45:23.0879561Z provided, the launcher will trigger an error: "error: unrecognized arguments: 2025-03-17T18:45:23.0879801Z --local-rank=". For training code that only supports PyTorch 2.0.0+, 2025-03-17T18:45:23.0879965Z including ``"--local-rank"`` should be sufficient. 2025-03-17T18:45:23.0880054Z 2025-03-17T18:45:23.0880304Z 3. In your training program, you are supposed to call the following function 2025-03-17T18:45:23.0880550Z at the beginning to start the distributed backend. It is strongly recommended 2025-03-17T18:45:23.0880791Z that ``init_method=env://``. Other init methods (e.g. ``tcp://``) may work, 2025-03-17T18:45:23.0880992Z but ``env://`` is the one that is officially supported by this module. 
2025-03-17T18:45:23.0881091Z 2025-03-17T18:45:23.0881181Z :: 2025-03-17T18:45:23.0881275Z 2025-03-17T18:45:23.0881486Z >>> torch.distributed.init_process_group(backend='YOUR BACKEND', 2025-03-17T18:45:23.0881687Z >>> init_method='env://') 2025-03-17T18:45:23.0881777Z 2025-03-17T18:45:23.0882035Z 4. In your training program, you can either use regular distributed functions 2025-03-17T18:45:23.0882279Z or use :func:`torch.nn.parallel.DistributedDataParallel` module. If your 2025-03-17T18:45:23.0882509Z training program uses GPUs for training and you would like to use 2025-03-17T18:45:23.0882705Z :func:`torch.nn.parallel.DistributedDataParallel` module, 2025-03-17T18:45:23.0882831Z here is how to configure it. 2025-03-17T18:45:23.0882919Z 2025-03-17T18:45:23.0883025Z :: 2025-03-17T18:45:23.0883114Z 2025-03-17T18:45:23.0883330Z >>> model = torch.nn.parallel.DistributedDataParallel(model, 2025-03-17T18:45:23.0883476Z >>> device_ids=[args.local_rank], 2025-03-17T18:45:23.0883638Z >>> output_device=args.local_rank) 2025-03-17T18:45:23.0883727Z 2025-03-17T18:45:23.0883995Z Please ensure that ``device_ids`` argument is set to be the only GPU device id 2025-03-17T18:45:23.0884236Z that your code will be operating on. This is generally the local rank of the 2025-03-17T18:45:23.0884502Z process. In other words, the ``device_ids`` needs to be ``[args.local_rank]``, 2025-03-17T18:45:23.0884728Z and ``output_device`` needs to be ``args.local_rank`` in order to use this 2025-03-17T18:45:23.0884833Z utility 2025-03-17T18:45:23.0884921Z 2025-03-17T18:45:23.0885186Z 5. Another way to pass ``local_rank`` to the subprocesses via environment variable 2025-03-17T18:45:23.0885438Z ``LOCAL_RANK``. This behavior is enabled when you launch the script with 2025-03-17T18:45:23.0885678Z ``--use-env=True``. You must adjust the subprocess example above to replace 2025-03-17T18:45:23.0885881Z ``args.local_rank`` with ``os.environ['LOCAL_RANK']``; the launcher 2025-03-17T18:45:23.0886083Z will not pass ``--local-rank`` when you specify this flag. 2025-03-17T18:45:23.0886173Z 2025-03-17T18:45:23.0886283Z .. warning:: 2025-03-17T18:45:23.0886373Z 2025-03-17T18:45:23.0886596Z ``local_rank`` is NOT globally unique: it is only unique per process 2025-03-17T18:45:23.0886818Z on a machine. Thus, don't use it to decide if you should, e.g., 2025-03-17T18:45:23.0886961Z write to a networked filesystem. See 2025-03-17T18:45:23.0887189Z https://github.com/pytorch/pytorch/issues/12042 for an example of 2025-03-17T18:45:23.0887373Z how things can go wrong if you don't do this correctly. 2025-03-17T18:45:23.0887464Z 2025-03-17T18:45:23.0887554Z 2025-03-17T18:45:23.0887650Z 2025-03-17T18:45:23.0887738Z 2025-03-17T18:45:23.0888008Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0888096Z 2025-03-17T18:45:23.0888209Z warnings.warn(msg) 2025-03-17T18:45:23.0888301Z 2025-03-17T18:45:23.0888535Z --- Parse Warning: 44 / 116 --- 2025-03-17T18:45:23.0889600Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=init_from_local_shards in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/__init__.py line=361. 2025-03-17T18:45:23.0889887Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0889969Z 2025-03-17T18:45:23.0890229Z Creates an :class:`ShardedTensor` from local shards and the global metadata. 
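Notes 2 through 5 of the ``torch.distributed.launch`` docstring above fit together as follows. A consolidated sketch for a training script that reads the local rank from the ``LOCAL_RANK`` environment variable (the behaviour with ``--use-env``, and the default with torchrun) instead of parsing ``--local-rank``::

    import os

    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])   # set by the launcher
    torch.cuda.set_device(local_rank)            # run only on this process's GPU
    dist.init_process_group(backend="nccl", init_method="env://")

    model = torch.nn.Linear(16, 16).cuda()
    # device_ids / output_device are the local rank of this process
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[local_rank], output_device=local_rank
    )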
2025-03-17T18:45:23.0890385Z Needs to be called on all ranks in an SPMD fashion. 2025-03-17T18:45:23.0890483Z 2025-03-17T18:45:23.0890572Z Args: 2025-03-17T18:45:23.0890865Z local_shards (List[:class `torch.distributed._shard.sharded_tensor.Shard`]): A list 2025-03-17T18:45:23.0891040Z of shards that represent the local shards on this rank. 2025-03-17T18:45:23.0891339Z global_size (int...): a list, tuple, or `torch.Size` of integers defining the 2025-03-17T18:45:23.0891468Z shape of the overall sharded tensor. 2025-03-17T18:45:23.0891566Z 2025-03-17T18:45:23.0891661Z Keyword args: 2025-03-17T18:45:23.0891940Z process_group (ProcessGroup, optional): The process group to work on. If None, 2025-03-17T18:45:23.0892079Z the default process group will be used. 2025-03-17T18:45:23.0892274Z init_rrefs (bool, optional): Whether or not to initialize 2025-03-17T18:45:23.0892487Z :class:`torch.distributed.rpc.RRef`s pointing to remote shards. 2025-03-17T18:45:23.0892704Z Need to initialize the RPC Framework if specified as ``True``. 2025-03-17T18:45:23.0892814Z Default: ``False``. 2025-03-17T18:45:23.0892916Z 2025-03-17T18:45:23.0893008Z Returns: 2025-03-17T18:45:23.0893179Z A :class:`ShardedTensor` object handle on this rank 2025-03-17T18:45:23.0893265Z 2025-03-17T18:45:23.0893370Z 2025-03-17T18:45:23.0893464Z Examples: 2025-03-17T18:45:23.0893740Z Suppose we want construct a sharded tensor on two ranks, global size = (10, 5), 2025-03-17T18:45:23.0893931Z each shard have a (5, 5) local tensor, we can do it like below: 2025-03-17T18:45:23.0894032Z 2025-03-17T18:45:23.0894125Z on rank 0: 2025-03-17T18:45:23.0894263Z >>> # xdoctest: +SKIP("not distributed") 2025-03-17T18:45:23.0894392Z >>> local_shard_metadata = ShardMetadata( 2025-03-17T18:45:23.0894513Z >>> shard_offsets=[0, 0], 2025-03-17T18:45:23.0894652Z >>> shard_lengths=[5, 5], 2025-03-17T18:45:23.0894779Z >>> placement="rank:0/cuda:0" 2025-03-17T18:45:23.0894867Z >>> ) 2025-03-17T18:45:23.0895064Z >>> local_shards = [Shard(torch.randn(5, 5), local_shard_metadata)] 2025-03-17T18:45:23.0895274Z >>> sharded_tensor = init_from_local_shards(local_shards, [10, 5]) 2025-03-17T18:45:23.0895360Z 2025-03-17T18:45:23.0895469Z on rank 1: 2025-03-17T18:45:23.0895593Z >>> # xdoctest: +SKIP("not distributed") 2025-03-17T18:45:23.0895732Z >>> local_shard_metadata = ShardMetadata( 2025-03-17T18:45:23.0895839Z >>> shard_offsets=[5, 0], 2025-03-17T18:45:23.0895988Z >>> shard_lengths=[5, 5], 2025-03-17T18:45:23.0896101Z >>> placement="rank:1/cuda:1" 2025-03-17T18:45:23.0896200Z >>> ) 2025-03-17T18:45:23.0896401Z >>> local_shards = [Shard(torch.randn(5, 5), local_shard_metadata)] 2025-03-17T18:45:23.0896615Z >>> sharded_tensor = init_from_local_shards(local_shards, [10, 5]) 2025-03-17T18:45:23.0896706Z 2025-03-17T18:45:23.0896980Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0897066Z 2025-03-17T18:45:23.0897183Z warnings.warn(msg) 2025-03-17T18:45:23.0897268Z 2025-03-17T18:45:23.0897478Z --- Parse Warning: 45 / 116 --- 2025-03-17T18:45:23.0898588Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ShardedTensor._init_from_local_tensor in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/api.py line=799. 
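The two per-rank snippets in the init_from_local_shards example above can be written rank-generically. A sketch, assuming a 2-rank SPMD launch with the default process group already initialized; the import locations and the ``shard_sizes`` field name follow the later ShardedTensor examples and are an assumption here::

    import torch
    import torch.distributed as dist
    from torch.distributed._shard.metadata import ShardMetadata
    from torch.distributed._shard.sharded_tensor import Shard, init_from_local_shards

    rank = dist.get_rank()
    local_shard_metadata = ShardMetadata(
        shard_offsets=[5 * rank, 0],          # rank 0 owns rows 0-4, rank 1 rows 5-9
        shard_sizes=[5, 5],
        placement=f"rank:{rank}/cuda:{rank}",
    )
    local_shards = [Shard(torch.randn(5, 5), local_shard_metadata)]
    sharded_tensor = init_from_local_shards(local_shards, [10, 5])  # global size (10, 5)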
2025-03-17T18:45:23.0898872Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0898962Z 2025-03-17T18:45:23.0899245Z Initialize a ShardedTensor given only one local tensor, global sharded tensor 2025-03-17T18:45:23.0899367Z size and sharding spec on each rank. 2025-03-17T18:45:23.0899470Z 2025-03-17T18:45:23.0899559Z Args: 2025-03-17T18:45:23.0899803Z local_tensor (Tensor): Single tensor of local shard stored in each rank. 2025-03-17T18:45:23.0900067Z sharding_spec (:class:`torch.distributed._shard.sharding_spec.ShardingSpec`): 2025-03-17T18:45:23.0900250Z The specification describing how to shard the Tensor. 2025-03-17T18:45:23.0900475Z global_size (Sequence[int]): Size of the sharded tensor. 2025-03-17T18:45:23.0900746Z process_group (ProcessGroup, optional): The process group to aggregate on. 2025-03-17T18:45:23.0900844Z Default: None 2025-03-17T18:45:23.0901032Z init_rrefs (bool, optional): Whether or not to initialize 2025-03-17T18:45:23.0901248Z :class:`torch.distributed.rpc.RRef`s pointing to remote shards. 2025-03-17T18:45:23.0901456Z Need to initialize the RPC Framework if specified as ``True``. 2025-03-17T18:45:23.0901558Z Default: ``False``. 2025-03-17T18:45:23.0901658Z 2025-03-17T18:45:23.0901749Z Returns: 2025-03-17T18:45:23.0902011Z A :class:`ShardedTensor` sharded based on the given sharding_spec with local 2025-03-17T18:45:23.0902138Z tensor stored in the current rank. 2025-03-17T18:45:23.0902237Z 2025-03-17T18:45:23.0902328Z Examples: 2025-03-17T18:45:23.0902448Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0902591Z >>> # All tensors below are of torch.int64 type. 2025-03-17T18:45:23.0902726Z >>> # We have 2 process groups, 2 ranks. 2025-03-17T18:45:23.0902908Z >>> tensor = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank 2025-03-17T18:45:23.0903126Z >>> local_tensor = torch.unsqueeze(torch.cat([tensor, tensor + 2])) 2025-03-17T18:45:23.0903227Z >>> local_tensor 2025-03-17T18:45:23.0903343Z tensor([[1, 2, 3, 4]]) # Rank 0 2025-03-17T18:45:23.0903448Z tensor([[3, 4, 5, 6]]) # Rank 1 2025-03-17T18:45:23.0903561Z >>> sharding_dim = 0 2025-03-17T18:45:23.0903719Z >>> sharding_spec = ChunkShardingSpec( 2025-03-17T18:45:23.0903823Z dim=sharding_dim, 2025-03-17T18:45:23.0903936Z placements=[ 2025-03-17T18:45:23.0904038Z "rank:0/cuda:0", 2025-03-17T18:45:23.0904150Z "rank:1/cuda:1", 2025-03-17T18:45:23.0904239Z ], 2025-03-17T18:45:23.0904341Z ) 2025-03-17T18:45:23.0904484Z >>> st = ShardedTensor._init_from_local_tensor( 2025-03-17T18:45:23.0904624Z ... local_tensor, sharding_spec, [2, 4] 2025-03-17T18:45:23.0904711Z ... ) 2025-03-17T18:45:23.0904813Z >>> st 2025-03-17T18:45:23.0904911Z ShardedTensor( 2025-03-17T18:45:23.0905068Z ShardedTensorMetadata( 2025-03-17T18:45:23.0905172Z shards_metadata=[ 2025-03-17T18:45:23.0905465Z ShardMetadata(shard_offsets=[0, 0], shard_sizes=[1, 4], placement=rank:0/cuda:0), 2025-03-17T18:45:23.0905735Z ShardMetadata(shard_offsets=[1, 0], shard_sizes=[1, 4], placement=rank:1/cuda:1), 2025-03-17T18:45:23.0905837Z ], 2025-03-17T18:45:23.0905948Z size=torch.Size([2, 4]) 2025-03-17T18:45:23.0906047Z ) 2025-03-17T18:45:23.0906150Z >>> st.local_tensor() 2025-03-17T18:45:23.0906269Z tensor([1, 2, 3, 4]) # Rank 0 2025-03-17T18:45:23.0906372Z tensor([3, 4, 5, 6]) # Rank 1 2025-03-17T18:45:23.0906566Z 2025-03-17T18:45:23.0906847Z Warning: This API is experimental and subject to change. 
It lacks full 2025-03-17T18:45:23.0907107Z cross-rank validation, and we only validate the local shard on the current rank. 2025-03-17T18:45:23.0907328Z We rely entirely on the user to ensure the local tensor is sharded based on the 2025-03-17T18:45:23.0907445Z sharding spec. 2025-03-17T18:45:23.0907531Z 2025-03-17T18:45:23.0907803Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0907893Z 2025-03-17T18:45:23.0908008Z warnings.warn(msg) 2025-03-17T18:45:23.0908093Z 2025-03-17T18:45:23.0908310Z --- Parse Warning: 46 / 116 --- 2025-03-17T18:45:23.0909436Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ShardedTensor.reshard in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharded_tensor/api.py line=1040. 2025-03-17T18:45:23.0909721Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0909808Z 2025-03-17T18:45:23.0910081Z Reshard a sharded tensor given the ``resharding_spec``. For now, we only support 2025-03-17T18:45:23.0910183Z a single local shard. 2025-03-17T18:45:23.0910281Z 2025-03-17T18:45:23.0910510Z If ``resharding_spec`` is the same as the original one, this becomes a no-op. 2025-03-17T18:45:23.0910771Z If ``resharding_spec`` shares only the same sharding dim with the original one, 2025-03-17T18:45:23.0910888Z we swap local shards directly. 2025-03-17T18:45:23.0911174Z For more generic cases, we merge different shards across different ranks and split 2025-03-17T18:45:23.0911432Z the local shards based on the ``resharding_spec`` via the `all_to_all` collective API. 2025-03-17T18:45:23.0911535Z 2025-03-17T18:45:23.0911622Z Args: 2025-03-17T18:45:23.0911938Z resharding_spec (:class:`torch.distributed._shard.sharding_spec.ShardingSpec`): The 2025-03-17T18:45:23.0912109Z specification describing how the tensor is sharded. 2025-03-17T18:45:23.0912208Z 2025-03-17T18:45:23.0912298Z Returns: 2025-03-17T18:45:23.0912509Z A :class:`ShardedTensor` object whose local shards are resharded. 2025-03-17T18:45:23.0912608Z 2025-03-17T18:45:23.0912698Z Examples: 2025-03-17T18:45:23.0912812Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0912937Z >>> # We have one process group with 4 ranks.
2025-03-17T18:45:23.0913129Z >>> tensor = torch.arange(4, dtype=torch.int64) + 1 + 2 * rank 2025-03-17T18:45:23.0913284Z >>> tensor = torch.stack([tensor, tensor]) 2025-03-17T18:45:23.0913384Z >>> tensor 2025-03-17T18:45:23.0913511Z tensor([[1, 2, 3, 4], [1, 2, 3, 4]]) # Rank 0 2025-03-17T18:45:23.0913648Z tensor([[3, 4, 5, 6], [3, 4, 5, 6]]) # Rank 1 2025-03-17T18:45:23.0913776Z tensor([[5, 6, 7, 8], [5, 6, 7, 8]]) # Rank 2 2025-03-17T18:45:23.0913921Z tensor([[7, 8, 9, 10], [7, 8, 9, 10]]) # Rank 3 2025-03-17T18:45:23.0914024Z >>> sharding_dim = 0 2025-03-17T18:45:23.0914154Z >>> spec = ChunkShardingSpec( 2025-03-17T18:45:23.0914259Z dim=sharding_dim, 2025-03-17T18:45:23.0914396Z placements=[ 2025-03-17T18:45:23.0914499Z "rank:0/cuda:0", 2025-03-17T18:45:23.0914611Z "rank:1/cuda:1", 2025-03-17T18:45:23.0914712Z "rank:2/cuda:2", 2025-03-17T18:45:23.0914823Z "rank:3/cuda:3", 2025-03-17T18:45:23.0914914Z ], 2025-03-17T18:45:23.0915017Z ) 2025-03-17T18:45:23.0915128Z >>> current_offsets = [0] * 2 2025-03-17T18:45:23.0915256Z >>> current_offsets[0] = rank * 2 2025-03-17T18:45:23.0915376Z >>> shard_metadata = ShardMetadata( 2025-03-17T18:45:23.0915546Z shard_offsets=copy.deepcopy(current_offsets), 2025-03-17T18:45:23.0915666Z shard_sizes=tensor.size(), 2025-03-17T18:45:23.0915803Z placement=spec.placements[rank], 2025-03-17T18:45:23.0915888Z ) 2025-03-17T18:45:23.0916003Z >>> local_shards = [ 2025-03-17T18:45:23.0916092Z Shard( 2025-03-17T18:45:23.0916211Z tensor=tensor, 2025-03-17T18:45:23.0916327Z metadata=shard_metadata, 2025-03-17T18:45:23.0916414Z ) 2025-03-17T18:45:23.0916511Z ] 2025-03-17T18:45:23.0916745Z >>> st = ShardedTensor._init_from_local_shards(local_shards, tensor.size()) 2025-03-17T18:45:23.0916859Z >>> sharding_dim = 1 2025-03-17T18:45:23.0916986Z >>> resharding_spec = ChunkShardingSpec( 2025-03-17T18:45:23.0917097Z dim=sharding_dim, 2025-03-17T18:45:23.0917199Z placements=[ 2025-03-17T18:45:23.0917306Z "rank:0/cuda:0", 2025-03-17T18:45:23.0917403Z "rank:1/cuda:1", 2025-03-17T18:45:23.0917555Z "rank:2/cuda:2", 2025-03-17T18:45:23.0917656Z "rank:3/cuda:3", 2025-03-17T18:45:23.0917752Z ], 2025-03-17T18:45:23.0917840Z ) 2025-03-17T18:45:23.0917965Z >>> st.reshard(resharding_spec) 2025-03-17T18:45:23.0918091Z >>> tensor = st.local_shards()[0].tensor 2025-03-17T18:45:23.0918190Z >>> tensor 2025-03-17T18:45:23.0918334Z tensor([[1], [1], [3], [3], [5], [5], [7], [7]]) # Rank 0 2025-03-17T18:45:23.0918487Z tensor([[2], [2], [4], [4], [6], [6], [8], [8]]) # Rank 1 2025-03-17T18:45:23.0918630Z tensor([[3], [3], [5], [5], [7], [7], [9], [9]]) # Rank 2 2025-03-17T18:45:23.0918785Z tensor([[4], [4], [6], [6], [8], [8], [10], [10]]) # Rank 3 2025-03-17T18:45:23.0918869Z 2025-03-17T18:45:23.0919142Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0919231Z 2025-03-17T18:45:23.0919341Z warnings.warn(msg) 2025-03-17T18:45:23.0919426Z 2025-03-17T18:45:23.0919635Z --- Parse Warning: 47 / 116 --- 2025-03-17T18:45:23.0920631Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ShardingPlan in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/_shard/sharding_plan/api.py line=12. 2025-03-17T18:45:23.0920913Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0920998Z 2025-03-17T18:45:23.0921235Z Representation of a sharding plan, describes how to shard a module 2025-03-17T18:45:23.0921554Z across hosts. 
`plan` is used to shard module parameters according to the spec provided, 2025-03-17T18:45:23.0921858Z `output_plan` and `return_local_tensor` are optional, they are used to specify the output 2025-03-17T18:45:23.0922113Z layout of a module with a spec, and when to convert back to data parallel fashion. 2025-03-17T18:45:23.0922213Z 2025-03-17T18:45:23.0922299Z Args: 2025-03-17T18:45:23.0922587Z plan (Dict[str, Union[:class:`torch.distributed._shard.sharding_spec.ShardingSpec`, 2025-03-17T18:45:23.0922758Z :class:`torch.distributed._shard.sharder.Sharder`]): 2025-03-17T18:45:23.0923073Z a dict describes how to shard a module, there're currently two ways to shard a module: 2025-03-17T18:45:23.0923332Z 1. directly shard a module parameter by a `ShardingSpec`, keyed by the name of 2025-03-17T18:45:23.0923471Z a parameter to a `ShardingSpec`. 2025-03-17T18:45:23.0923734Z 2. shard a submodule by applying a `Sharder` on it, keyed by the name of a module 2025-03-17T18:45:23.0923854Z to a `Sharder` object. 2025-03-17T18:45:23.0924186Z output_plan (Dict[str, :class:`torch.distributed._shard.sharding_spec.ShardingSpec`), optional): 2025-03-17T18:45:23.0924467Z a dict specifies the layout of a module's output which produces a ShardedTensor, 2025-03-17T18:45:23.0924706Z keyed by the name of module to ShardingSpec("" in key means the root module). 2025-03-17T18:45:23.0924818Z Default: `None` 2025-03-17T18:45:23.0925076Z return_local_tensor (List[str], optional): a list of string, each element enables 2025-03-17T18:45:23.0925332Z a module's sharded output to be returned as a Tensor from its local shards to 2025-03-17T18:45:23.0925587Z ensure further processing in a data parallel fashion. ("" in list means the 2025-03-17T18:45:23.0925695Z root module). 2025-03-17T18:45:23.0925794Z Default: None 2025-03-17T18:45:23.0925895Z Example: 2025-03-17T18:45:23.0926185Z Suppose we want to shard a module with two linear layers and then run it with DDP, we also 2025-03-17T18:45:23.0926483Z want to convert the output of the second linear layer back to DDP, we can do it as follows: 2025-03-17T18:45:23.0926568Z 2025-03-17T18:45:23.0926801Z >>> # xdoctest: +REQUIRES(module:torch._C._distributed_c10d) 2025-03-17T18:45:23.0926919Z >>> class MyModule(nn.Module): 2025-03-17T18:45:23.0927046Z >>> def __init__(self) -> None: 2025-03-17T18:45:23.0927150Z >>> super().__init__() 2025-03-17T18:45:23.0927275Z >>> self.fc1 = nn.Linear() 2025-03-17T18:45:23.0927384Z >>> self.gelu = nn.GELU() 2025-03-17T18:45:23.0927503Z >>> self.fc2 = nn.Linear() 2025-03-17T18:45:23.0927613Z >>> self.relu = nn.Linear() 2025-03-17T18:45:23.0927707Z >>> 2025-03-17T18:45:23.0927819Z >>> def forward(self, input): 2025-03-17T18:45:23.0928008Z >>> return self.relu(self.fc2(self.gelu(self.fc1(input)))) 2025-03-17T18:45:23.0928094Z 2025-03-17T18:45:23.0928177Z 2025-03-17T18:45:23.0928324Z >>> # xdoctest: +SKIP("Undefined spec1, spec2) 2025-03-17T18:45:23.0928437Z >>> sharding_plan = ShardingPlan( 2025-03-17T18:45:23.0928546Z >>> plan={ 2025-03-17T18:45:23.0928652Z >>> "fc1.weight": spec1, 2025-03-17T18:45:23.0928768Z >>> "fc2.weight": spec2 2025-03-17T18:45:23.0928853Z >>> }, 2025-03-17T18:45:23.0928967Z >>> output_plan={ 2025-03-17T18:45:23.0929074Z >>> "fc2": output_spec 2025-03-17T18:45:23.0929174Z >>> }, 2025-03-17T18:45:23.0929283Z >>> return_local_tensor=["fc2"] 2025-03-17T18:45:23.0929383Z >>> ) 2025-03-17T18:45:23.0929467Z 2025-03-17T18:45:23.0929738Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 
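Every parse warning in this run ends with the same TokenError. A plausible cause, judging from the examples quoted above, is that continuation lines of multi-line calls (for example the bare dim=sharding_dim, lines inside the ChunkShardingSpec(...) examples) carry no ">>> " or "... " prompt, so only the opening line is collected as doctest source and the tokenizer hits end of input with the bracket still open. A minimal reproduction sketch, assuming xdoctest feeds the collected source through the standard tokenize module:

# Reproduction sketch (assumption: xdoctest tokenizes the collected doctest source
# with the standard library tokenizer). A multi-line call whose continuation lines
# lost their prompts leaves an unclosed bracket in the source.
import io
import tokenize

source_with_open_bracket = "sharding_spec = ChunkShardingSpec(\n"

try:
    list(tokenize.generate_tokens(io.StringIO(source_with_open_bracket).readline))
except tokenize.TokenError as err:
    print(err)  # e.g. ('unexpected EOF in multi-line statement', (2, 0))

Prefixing the continuation lines with "... " (as the st = ShardedTensor._init_from_local_tensor(...) example above already does) would let the block tokenize cleanly.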
2025-03-17T18:45:23.0929850Z 2025-03-17T18:45:23.0929964Z warnings.warn(msg) 2025-03-17T18:45:23.0930048Z 2025-03-17T18:45:23.0930253Z --- Parse Warning: 48 / 116 --- 2025-03-17T18:45:23.0931385Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=post_localSGD_hook in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/post_localSGD_hook.py line=72. 2025-03-17T18:45:23.0931663Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0931748Z 2025-03-17T18:45:23.0931897Z Run post-localSGD algorithm. 2025-03-17T18:45:23.0931981Z 2025-03-17T18:45:23.0932233Z This DDP communication hook is used for running post-localSGD algorithm, 2025-03-17T18:45:23.0932396Z by combining with a model averaging component (e.g., 2025-03-17T18:45:23.0932737Z :class:`~torch.distributed.algorithms.model_averaging.averagers.PeriodicModelAverager`) 2025-03-17T18:45:23.0932860Z that runs after the optimizer step. 2025-03-17T18:45:23.0932956Z 2025-03-17T18:45:23.0933045Z Args: 2025-03-17T18:45:23.0933281Z state (PostLocalSGDState): State information to run post-localSGD. 2025-03-17T18:45:23.0933570Z Users mainly need to tune ``start_localSGD_iter`` to determine when to start local SGD. 2025-03-17T18:45:23.0934010Z bucket (dist.GradBucket): Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. 2025-03-17T18:45:23.0934264Z Note that since DDP comm hook only supports single process single device mode, 2025-03-17T18:45:23.0934434Z only exactly one tensor is stored in this bucket. 2025-03-17T18:45:23.0934520Z 2025-03-17T18:45:23.0934619Z Returns: 2025-03-17T18:45:23.0934870Z Future handler of the communication, which updates the gradients in place. 2025-03-17T18:45:23.0934966Z 2025-03-17T18:45:23.0935064Z Example:: 2025-03-17T18:45:23.0935177Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0935438Z >>> state = PostLocalSGDState(process_group=process_group, subgroup=subgroup, 2025-03-17T18:45:23.0935574Z start_localSGD_iter=10) 2025-03-17T18:45:23.0935751Z >>> ddp_model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:23.0936151Z >>> # Also need to establish a model averaging module and run model averaging after ``optimizer.step()``. 2025-03-17T18:45:23.0936507Z >>> # Please refer to the examples in ``torch.distributed.algorithms.model_averaging.averagers`` module. 2025-03-17T18:45:23.0936603Z 2025-03-17T18:45:23.0937017Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0937116Z 2025-03-17T18:45:23.0937223Z warnings.warn(msg) 2025-03-17T18:45:23.0937308Z 2025-03-17T18:45:23.0937515Z --- Parse Warning: 49 / 116 --- 2025-03-17T18:45:23.0938615Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=powerSGD_hook in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/ddp_comm_hooks/powerSGD_hook.py line=342. 2025-03-17T18:45:23.0938884Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0938981Z 2025-03-17T18:45:23.0939098Z Implement PowerSGD algorithm. 2025-03-17T18:45:23.0939185Z 2025-03-17T18:45:23.0939529Z This DDP communication hook implements PowerSGD gradient compression 2025-03-17T18:45:23.0939810Z algorithm described in the `paper `_. 
2025-03-17T18:45:23.0940184Z Once gradient tensors are aggregated across all workers, this hook applies 2025-03-17T18:45:23.0940324Z compression as follows: 2025-03-17T18:45:23.0940577Z 2025-03-17T18:45:23.0941110Z 1. Views the input flattened 1D gradient tensor as a list of per-parameter tensors, and divides all the tensors into two groups: 2025-03-17T18:45:23.0955079Z 2025-03-17T18:45:23.0955577Z 1.1 The tensors that should be compressed before allreduce, because the compression can give enough saving in bandwidth. 2025-03-17T18:45:23.0955658Z 2025-03-17T18:45:23.0956086Z 1.2 Rest of the tensors will be directly allreduced without compression, including all the vector tensors (for biases). 2025-03-17T18:45:23.0956166Z 2025-03-17T18:45:23.0956274Z 2. Handles uncompressed tensors: 2025-03-17T18:45:23.0956362Z 2025-03-17T18:45:23.0956887Z 2.1. Allocate contiguous memory for those uncompressed tensors, and allreduces all the uncompressed tensors as a batch, without compression; 2025-03-17T18:45:23.0957118Z 2025-03-17T18:45:23.0957468Z 2.2. Copies the individual uncompressed tensors from the contiguous memory back to the input tensor. 2025-03-17T18:45:23.0957573Z 2025-03-17T18:45:23.0957815Z 3. Handles the tensors that should be compressed by PowerSGD compression: 2025-03-17T18:45:23.0957917Z 2025-03-17T18:45:23.0958165Z 3.1. For each tensor M, creates two low-rank tensors P and Q for decomposing M, 2025-03-17T18:45:23.0958501Z such that M = PQ^T, where Q is initialized from a standard normal distribution and orthogonalized; 2025-03-17T18:45:23.0958587Z 2025-03-17T18:45:23.0958758Z 3.2. Computes each P in Ps, which is equal to MQ; 2025-03-17T18:45:23.0958840Z 2025-03-17T18:45:23.0958965Z 3.3. Allreduces Ps as a batch; 2025-03-17T18:45:23.0959046Z 2025-03-17T18:45:23.0959175Z 3.4. Orthogonalizes each P in Ps; 2025-03-17T18:45:23.0959261Z 2025-03-17T18:45:23.0959479Z 3.5. Computes each Q in Qs, which is approximately equal to M^TP; 2025-03-17T18:45:23.0959561Z 2025-03-17T18:45:23.0959684Z 3.6. Allreduces Qs as a batch; 2025-03-17T18:45:23.0959771Z 2025-03-17T18:45:23.0960090Z 3.7. Computes each M among all the compressed tensors, which is approximately equal to PQ^T. 2025-03-17T18:45:23.0960174Z 2025-03-17T18:45:23.0960608Z Note that this communication hook enforces vanilla allreduce for the first ``state.start_powerSGD_iter`` iterations. 2025-03-17T18:45:23.0960896Z This not only gives the user more control over the tradeoff between speedup and accuracy, 2025-03-17T18:45:23.0961422Z but also helps abstract away some complexity of the internal optimization of DDP for future communication hook developers. 2025-03-17T18:45:23.0961510Z 2025-03-17T18:45:23.0961614Z Args: 2025-03-17T18:45:23.0962058Z state (PowerSGDState): State information to configure the compression rate and support error feedback, warm start, etc. 2025-03-17T18:45:23.0962435Z To tune the compression configs, mainly need to tune ``matrix_approximation_rank``, ``start_powerSGD_iter`` 2025-03-17T18:45:23.0962552Z and ``min_compression_rate``. 2025-03-17T18:45:23.0962987Z bucket (dist.GradBucket): Bucket that stores a 1D flattened gradient tensor that batches multiple per-variable tensors. 2025-03-17T18:45:23.0963243Z Note that since DDP comm hook only supports single process single device mode, 2025-03-17T18:45:23.0963412Z only exactly one tensor is stored in this bucket. 
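The per-tensor math in steps 3.1-3.7 above can be sketched in a single process as follows; the two allreduce steps are stubbed out as comments, and approx_rank=1 mirrors the matrix_approximation_rank=1 used in the example further down. This is an illustration of the described algorithm, not the hook implementation itself.

import torch

def powersgd_compress_decompress(M: torch.Tensor, approx_rank: int = 1) -> torch.Tensor:
    # 3.1: initialize Q from a standard normal distribution and orthogonalize it.
    Q = torch.randn(M.shape[1], approx_rank)
    Q, _ = torch.linalg.qr(Q)
    # 3.2: P = M Q.
    P = M @ Q
    # 3.3: (allreduce of P across ranks would happen here)
    # 3.4: orthogonalize P.
    P, _ = torch.linalg.qr(P)
    # 3.5: Q ~= M^T P.
    Q = M.T @ P
    # 3.6: (allreduce of Q across ranks would happen here)
    # 3.7: decompress: M ~= P Q^T.
    return P @ Q.T

grad = torch.randn(8, 8)
approx = powersgd_compress_decompress(grad, approx_rank=1)
print(torch.linalg.matrix_rank(approx))  # tensor(1): a rank-1 approximation of the gradient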
2025-03-17T18:45:23.0963498Z 2025-03-17T18:45:23.0963599Z Returns: 2025-03-17T18:45:23.0963853Z Future handler of the communication, which updates the gradients in place. 2025-03-17T18:45:23.0963949Z 2025-03-17T18:45:23.0964049Z Example:: 2025-03-17T18:45:23.0964155Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.0964450Z >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, 2025-03-17T18:45:23.0964615Z start_powerSGD_iter=10, min_compression_rate=0.5) 2025-03-17T18:45:23.0964789Z >>> ddp_model.register_comm_hook(state, powerSGD_hook) 2025-03-17T18:45:23.0964873Z 2025-03-17T18:45:23.0965172Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0965257Z 2025-03-17T18:45:23.0965372Z warnings.warn(msg) 2025-03-17T18:45:23.0965456Z 2025-03-17T18:45:23.0965719Z --- Parse Warning: 50 / 116 --- 2025-03-17T18:45:23.0966849Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=PeriodicModelAverager in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/model_averaging/averagers.py line=38. 2025-03-17T18:45:23.0967128Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0967255Z 2025-03-17T18:45:23.0967463Z Averages parameters periodically after the warm-up stage. 2025-03-17T18:45:23.0967549Z 2025-03-17T18:45:23.0967832Z This can be used for running `post-local SGD `_, 2025-03-17T18:45:23.0968030Z by running :class:`~torch.nn.DistributedDataParallel` (DDP) 2025-03-17T18:45:23.0968286Z using the subgroups created by :meth:`~torch.distributed.new_subgroups`. 2025-03-17T18:45:23.0968372Z 2025-03-17T18:45:23.0968471Z Args: 2025-03-17T18:45:23.0968643Z period (int): The number of steps per model averaging. 2025-03-17T18:45:23.0968933Z Usually the period should be greater than ``1`` to reduce the communication cost. 2025-03-17T18:45:23.0969067Z Otherwise, only DDP needs to be used. 2025-03-17T18:45:23.0969291Z warmup_steps (int): The number of warm-up steps. During this stage, 2025-03-17T18:45:23.0969417Z model averaging is skipped. 2025-03-17T18:45:23.0969619Z process_group: The process group to be used for all-reduce. 2025-03-17T18:45:23.0969764Z If ``None``, the default process group, which 2025-03-17T18:45:23.0969970Z is created by :func:`torch.distributed.init_process_group`, 2025-03-17T18:45:23.0970097Z will be used. 
(default: ``None``) 2025-03-17T18:45:23.0970193Z 2025-03-17T18:45:23.0970288Z Example:: 2025-03-17T18:45:23.0970386Z 2025-03-17T18:45:23.0970519Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.0970626Z >>> import torch 2025-03-17T18:45:23.0970805Z >>> import torch.distributed as dist 2025-03-17T18:45:23.0971142Z >>> import torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook as post_localSGD 2025-03-17T18:45:23.0971419Z >>> import torch.distributed.algorithms.model_averaging.averagers as averagers 2025-03-17T18:45:23.0971536Z >>> import torch.nn as nn 2025-03-17T18:45:23.0971626Z >>> 2025-03-17T18:45:23.0971819Z >>> dist.init_process_group("nccl", rank=rank, world_size=16) 2025-03-17T18:45:23.0971934Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:23.0972079Z >>> module = nn.Linear(1, 1, bias=False).cuda() 2025-03-17T18:45:23.0972240Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:23.0972396Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:23.0972485Z >>> ) 2025-03-17T18:45:23.0972646Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:23.0972953Z >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100) 2025-03-17T18:45:23.0973132Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:23.0973222Z >>> 2025-03-17T18:45:23.0973502Z >>> # In the first 100 steps, run global gradient averaging like normal DDP at every step. 2025-03-17T18:45:23.0973664Z >>> # After 100 steps, run model averaging every 4 steps. 2025-03-17T18:45:23.0973995Z >>> # Note that ``warmup_steps`` must be the same as ``start_localSGD_iter`` used in ``PostLocalSGDState``. 2025-03-17T18:45:23.0974249Z >>> averager = averagers.PeriodicModelAverager(period=4, warmup_steps=100) 2025-03-17T18:45:23.0974396Z >>> for step in range(0, 200): 2025-03-17T18:45:23.0974510Z >>> optimizer.zero_grad() 2025-03-17T18:45:23.0974634Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:23.0974739Z >>> loss.backward() 2025-03-17T18:45:23.0974855Z >>> optimizer.step() 2025-03-17T18:45:23.0975057Z >>> # Will average model parameters globally every 4 steps. Thus, 2025-03-17T18:45:23.0975277Z >>> # inter-node communication only occurs every 4 iterations after 2025-03-17T18:45:23.0975409Z >>> # the initial ``warmup_steps`` period. 2025-03-17T18:45:23.0975580Z >>> averager.average_parameters(model.parameters()) 2025-03-17T18:45:23.0975694Z 2025-03-17T18:45:23.0975962Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0976049Z 2025-03-17T18:45:23.0976160Z warnings.warn(msg) 2025-03-17T18:45:23.0976246Z 2025-03-17T18:45:23.0976452Z --- Parse Warning: 51 / 116 --- 2025-03-17T18:45:23.0977699Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=HierarchicalModelAverager in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/algorithms/model_averaging/hierarchical_model_averager.py line=19. 2025-03-17T18:45:23.0977984Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0978074Z 2025-03-17T18:45:23.0978431Z Runs hierarchical model averaging (`hierarchical SGD `_). 2025-03-17T18:45:23.0978517Z 2025-03-17T18:45:23.0978845Z Process groups of different sizes are organized in a hierarchy, and they average parameters 2025-03-17T18:45:23.0979055Z by using different periods concurrently after the warm-up stage. 
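For the plain PeriodicModelAverager above, the period/warmup_steps contract amounts to a simple step predicate. The helper below is hypothetical (not part of the API), and the exact step on which the first post-warm-up average fires is an assumption:

# Hypothetical helper illustrating the period/warmup_steps semantics described above:
# averaging is skipped during warm-up, then runs once every `period` optimizer steps.
def should_average(step: int, period: int, warmup_steps: int) -> bool:
    return step >= warmup_steps and step % period == 0

# warmup_steps=100, period=4 -> parameters are averaged at steps 100, 104, 108, ...
assert [s for s in range(98, 110) if should_average(s, 4, 100)] == [100, 104, 108]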
2025-03-17T18:45:23.0979480Z This is an extension of :class:`~torch.distributed.algorithms.model_averaging.averagers.PeriodicModelAverager` 2025-03-17T18:45:23.0979825Z that supports `post-local SGD `_, which essentially only supports 2025-03-17T18:45:23.0980148Z a two-level hierarchy: the intra-machine level and the global level, where the intra-machine 2025-03-17T18:45:23.0980559Z level is usually embedded in :meth:`~torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook`. 2025-03-17T18:45:23.0980879Z Similarly, the process groups within this class do not have such an intra-machine process 2025-03-17T18:45:23.0981160Z subgroup, which should be embedded by the post-local SGD communication hook instead. 2025-03-17T18:45:23.0981259Z 2025-03-17T18:45:23.0981350Z Args: 2025-03-17T18:45:23.0981627Z period_group_size_dict: An ordered dict mapping keys of model averaging period to 2025-03-17T18:45:23.0981832Z process group size, used for initializing process groups of 2025-03-17T18:45:23.0982073Z different sizes in a hierarchy to average parameters concurrently. 2025-03-17T18:45:23.0982293Z Particularly, at each iteration, there will be at most a single 2025-03-17T18:45:23.0982539Z process group that runs averaging -- the period of such group should 2025-03-17T18:45:23.0982759Z have the largest period which the current step can be divided by. 2025-03-17T18:45:23.0982938Z For example, if the dict has three keys: 2, 4, and 8, 2025-03-17T18:45:23.0983147Z then this means totally three process groups will be created to 2025-03-17T18:45:23.0983374Z average parameters every 2, 4, and 8 iterations, respectively. 2025-03-17T18:45:23.0983566Z At the 4th iteration, only the second process group will run 2025-03-17T18:45:23.0983761Z averaging, because the first process group should be a 2025-03-17T18:45:23.0984013Z subset of the second process group, and no need to execute the first 2025-03-17T18:45:23.0984158Z process group redundantly. 2025-03-17T18:45:23.0984362Z On the other hand, the third process group can only be triggered 2025-03-17T18:45:23.0984609Z every 8 iterations, so it will not be triggered at the 4th iteration. 2025-03-17T18:45:23.0984921Z warmup_steps (int): The number of warm-up steps. During this stage, model averaging is skipped. 2025-03-17T18:45:23.0985384Z process_group (ProcessGroup, optional): The overall process group containing all the processes that runs model averaging. 2025-03-17T18:45:23.0985586Z If ``None``, the default process group, which is created 2025-03-17T18:45:23.0985810Z by :func:`torch.distributed.init_process_group`, will be used. 
2025-03-17T18:45:23.0985937Z (default: ``None``) 2025-03-17T18:45:23.0986034Z 2025-03-17T18:45:23.0986127Z Example:: 2025-03-17T18:45:23.0986263Z >>> # xdoctest: +SKIP('undefined rank') 2025-03-17T18:45:23.0986391Z >>> from collections import OrderedDict 2025-03-17T18:45:23.0986605Z >>> import torch 2025-03-17T18:45:23.0986731Z >>> import torch.distributed as dist 2025-03-17T18:45:23.0987025Z >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import ( 2025-03-17T18:45:23.0987134Z >>> PostLocalSGDState, 2025-03-17T18:45:23.0987253Z >>> post_localSGD_hook, 2025-03-17T18:45:23.0987338Z >>> ) 2025-03-17T18:45:23.0987729Z >>> import torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD 2025-03-17T18:45:23.0987834Z >>> import torch.nn as nn 2025-03-17T18:45:23.0987935Z >>> 2025-03-17T18:45:23.0988119Z >>> dist.init_process_group("nccl", rank=rank, world_size=16) 2025-03-17T18:45:23.0988247Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:23.0988389Z >>> module = nn.Linear(1, 1, bias=False).to(rank) 2025-03-17T18:45:23.0988559Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:23.0988771Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:23.0988873Z >>> ) 2025-03-17T18:45:23.0989022Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:23.0989315Z >>> # Assume that each machine has 4 GPUs, then each intra-machine subgroup has a size of 4. 2025-03-17T18:45:23.0989442Z >>> subgroup, _ = dist.new_subgroups() 2025-03-17T18:45:23.0989771Z >>> state = PostLocalSGDState(process_group=None, subgroup=subgroup, start_localSGD_iter=100) 2025-03-17T18:45:23.0989937Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:23.0990038Z >>> 2025-03-17T18:45:23.0990326Z >>> # Average parameters among each group of 8 processes every 4 iterations, and among all 2025-03-17T18:45:23.0990463Z >>> # the 16 processes every 16 iterations. 2025-03-17T18:45:23.0990654Z >>> averager = hierarchicalSGD.HierarchicalModelAverager( 2025-03-17T18:45:23.0990911Z >>> period_group_size_dict=OrderedDict([(4, 8), (16, 16)]), warmup_steps=100) 2025-03-17T18:45:23.0991233Z >>> # Note that ``warmup_steps`` must be the same as ``start_localSGD_iter`` used in ``PostLocalSGDState``. 2025-03-17T18:45:23.0991514Z >>> # In the first 100 steps, run global gradient averaging like normal DDP at every step. 2025-03-17T18:45:23.0991676Z >>> # After 100 steps, run model averaging at two levels. 2025-03-17T18:45:23.0991799Z >>> for step in range(0, 200): 2025-03-17T18:45:23.0991908Z >>> optimizer.zero_grad() 2025-03-17T18:45:23.0992034Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:23.0992167Z >>> loss.backward() 2025-03-17T18:45:23.0992284Z >>> optimizer.step() 2025-03-17T18:45:23.0992447Z >>> # Average parameters after ``optimizer.step()``. 2025-03-17T18:45:23.0992749Z >>> # Thus, the inter-node communication only occurs periodically after ``warmup_steps``. 2025-03-17T18:45:23.0992919Z >>> averager.average_parameters(model.parameters()) 2025-03-17T18:45:23.0993016Z 2025-03-17T18:45:23.0993114Z .. warning :: 2025-03-17T18:45:23.0993385Z The last group size in the dict must be the size of the provided ``process_group``, 2025-03-17T18:45:23.0993623Z which indicates model averaging at the highest level of the hierarchy. 2025-03-17T18:45:23.0993969Z If ``process_group`` is not provided, then the last group size should be equal to the world size. 2025-03-17T18:45:23.0994055Z 2025-03-17T18:45:23.0994161Z .. 
warning :: 2025-03-17T18:45:23.0994400Z `HierarchicalModelAverager` is experimental and subject to change. 2025-03-17T18:45:23.0994497Z 2025-03-17T18:45:23.0994757Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0994852Z 2025-03-17T18:45:23.0994955Z warnings.warn(msg) 2025-03-17T18:45:23.0995039Z 2025-03-17T18:45:23.0995266Z --- Parse Warning: 52 / 116 --- 2025-03-17T18:45:23.0996363Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=BroadcastingTorchSaveReader in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/format_utils.py line=40. 2025-03-17T18:45:23.0996648Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.0996744Z 2025-03-17T18:45:23.0997044Z StorageReader for reading a Torch Save file. This reader will read the entire checkpoint 2025-03-17T18:45:23.0997305Z on the coordinator rank, and then broadcast and shard each tensor to all ranks. 2025-03-17T18:45:23.0997393Z 2025-03-17T18:45:23.0997557Z . N.B. Intended to be used with DynamicMetaLoadPlanner 2025-03-17T18:45:23.0997653Z 2025-03-17T18:45:23.0997745Z .. warning:: 2025-03-17T18:45:23.0997937Z Current implementation only supports loading Tensors. 2025-03-17T18:45:23.0998020Z 2025-03-17T18:45:23.0998201Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.0998304Z >>> sd = {"mode": model} 2025-03-17T18:45:23.0998406Z >>> dcp.load( 2025-03-17T18:45:23.0998493Z >>> sd, 2025-03-17T18:45:23.0998664Z >>> storage_reader=BroadcastingTorchSaveReader(), 2025-03-17T18:45:23.0998794Z >>> planner=DynamicMetaLoadPlanner(), 2025-03-17T18:45:23.0998928Z >>> checkpoint_id="path_to_model.pt" 2025-03-17T18:45:23.0999014Z >>> ) 2025-03-17T18:45:23.0999109Z 2025-03-17T18:45:23.0999368Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.0999467Z 2025-03-17T18:45:23.0999568Z warnings.warn(msg) 2025-03-17T18:45:23.0999669Z 2025-03-17T18:45:23.0999860Z --- Parse Warning: 53 / 116 --- 2025-03-17T18:45:23.1000931Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DynamicMetaLoadPlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/format_utils.py line=151. 2025-03-17T18:45:23.1001198Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1001296Z 2025-03-17T18:45:23.1001668Z Extension of DefaultLoadPlanner, which creates a new Metadata object based on the passed in state dict, 2025-03-17T18:45:23.1002015Z avoiding the need to read metadata from disk. This is useful when reading formats which don't have a 2025-03-17T18:45:23.1002136Z metadata file, like Torch Save files. 2025-03-17T18:45:23.1002232Z 2025-03-17T18:45:23.1002416Z . N.B. Intended to be used with BroadcastingTorchSaveReader 2025-03-17T18:45:23.1002541Z 2025-03-17T18:45:23.1002635Z .. warning:: 2025-03-17T18:45:23.1002820Z Current implementation only supports loading Tensors. 
2025-03-17T18:45:23.1002905Z 2025-03-17T18:45:23.1003033Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1003131Z >>> sd = {"mode": model} 2025-03-17T18:45:23.1003235Z >>> dcp.load( 2025-03-17T18:45:23.1003327Z >>> sd, 2025-03-17T18:45:23.1003491Z >>> storage_reader=BroadcastingTorchSaveReader(), 2025-03-17T18:45:23.1003621Z >>> planner=DynamicMetaLoadPlanner(), 2025-03-17T18:45:23.1003738Z >>> checkpoint_id="path_to_model.pt" 2025-03-17T18:45:23.1003863Z >>> ) 2025-03-17T18:45:23.1003950Z 2025-03-17T18:45:23.1004224Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1004305Z 2025-03-17T18:45:23.1004418Z warnings.warn(msg) 2025-03-17T18:45:23.1004501Z 2025-03-17T18:45:23.1004701Z --- Parse Warning: 54 / 116 --- 2025-03-17T18:45:23.1005768Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load_sharded_optimizer_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/optimizer.py line=221. 2025-03-17T18:45:23.1006050Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1006135Z 2025-03-17T18:45:23.1006361Z Load a state_dict in conjunction with FSDP sharded optimizer state. 2025-03-17T18:45:23.1006445Z 2025-03-17T18:45:23.1006627Z This is the current recommended way to checkpoint FSDP. 2025-03-17T18:45:23.1006730Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1006903Z >>> import torch.distributed.checkpoint as dist_cp 2025-03-17T18:45:23.1006990Z >>> # Save 2025-03-17T18:45:23.1007103Z >>> model: torch.nn.Model 2025-03-17T18:45:23.1007225Z >>> optim_params = model.parameters() 2025-03-17T18:45:23.1007385Z >>> optim = torch.optim.SGD(optim_params, lr=0.01) 2025-03-17T18:45:23.1007471Z >>> # Save 2025-03-17T18:45:23.1007704Z >>> with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT): 2025-03-17T18:45:23.1007802Z >>> state_dict = { 2025-03-17T18:45:23.1007972Z >>> "optimizer": FSDP.optim_state_dict(model, optim), 2025-03-17T18:45:23.1008136Z >>> "model": model.state_dict() 2025-03-17T18:45:23.1008241Z >>> } 2025-03-17T18:45:23.1008348Z >>> dist_cp.save_state_dict( 2025-03-17T18:45:23.1008467Z >>> state_dict=optim_state, 2025-03-17T18:45:23.1008650Z >>> storage_writer=dist_cp.FileSystemWriter("checkpoint"), 2025-03-17T18:45:23.1008796Z >>> planner=dist_cp.DefaultSavePlanner(), 2025-03-17T18:45:23.1008883Z >>> ) 2025-03-17T18:45:23.1008979Z >>> 2025-03-17T18:45:23.1009068Z >>> # Load 2025-03-17T18:45:23.1009309Z >>> with FSDP.state_dict_type(model_tp, StateDictType.SHARDED_STATE_DICT): 2025-03-17T18:45:23.1009445Z >>> model_state_dict = model_tp.state_dict() 2025-03-17T18:45:23.1009556Z >>> checkpoint = { 2025-03-17T18:45:23.1009666Z >>> "model": model_state_dict 2025-03-17T18:45:23.1009764Z >>> } 2025-03-17T18:45:23.1009875Z >>> dist_cp.load_state_dict( 2025-03-17T18:45:23.1009987Z >>> state_dict=checkpoint, 2025-03-17T18:45:23.1010194Z >>> storage_reader=dist_cp.FileSystemReader(checkpoint_file), 2025-03-17T18:45:23.1010331Z >>> planner=dist_cp.DefaultLoadPlanner(), 2025-03-17T18:45:23.1010428Z >>> ) 2025-03-17T18:45:23.1010586Z >>> model.load_state_dict(checkpoint["model_state"]) 2025-03-17T18:45:23.1010682Z >>> 2025-03-17T18:45:23.1010857Z >>> optim_state = dist_cp.load_sharded_optimizer_state_dict( 2025-03-17T18:45:23.1010970Z >>> model_state_dict, 2025-03-17T18:45:23.1011087Z >>> optimizer_key="optimizer", 2025-03-17T18:45:23.1011306Z >>> 
storage_reader=dist_cp.FileSystemReader("checkpoint"), 2025-03-17T18:45:23.1011392Z >>> ) 2025-03-17T18:45:23.1011486Z >>> 2025-03-17T18:45:23.1011636Z >>> flattened_osd = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:23.1011778Z >>> model, optim, optim_state["optimizer"] 2025-03-17T18:45:23.1011861Z >>> ) 2025-03-17T18:45:23.1011965Z >>> 2025-03-17T18:45:23.1012092Z >>> optim.load_state_dict(flattened_osd) 2025-03-17T18:45:23.1012188Z 2025-03-17T18:45:23.1012446Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1012543Z 2025-03-17T18:45:23.1012671Z warnings.warn(msg) 2025-03-17T18:45:23.1012765Z 2025-03-17T18:45:23.1012956Z --- Parse Warning: 55 / 116 --- 2025-03-17T18:45:23.1013943Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SavePlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/planner.py line=113. 2025-03-17T18:45:23.1014216Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1014314Z 2025-03-17T18:45:23.1014607Z Abstract class defining the protocol used by save_state_dict to plan the save process. 2025-03-17T18:45:23.1014703Z 2025-03-17T18:45:23.1015005Z SavePlanners are stateful objects that can be used to customize the whole save process. 2025-03-17T18:45:23.1015103Z 2025-03-17T18:45:23.1015388Z SavePlanner acts as an access proxy to the state_dict, so any transformation done to it 2025-03-17T18:45:23.1015522Z will be visible to the whole process. 2025-03-17T18:45:23.1015605Z 2025-03-17T18:45:23.1015898Z A planner subclass can expect the following sequence of calls during save_state_dict: 2025-03-17T18:45:23.1015980Z 2025-03-17T18:45:23.1016115Z 1) set_up_planner - called on all ranks. 2025-03-17T18:45:23.1016248Z Signals the start of a checkpoint save. 2025-03-17T18:45:23.1016343Z 2025-03-17T18:45:23.1016472Z 2) create_local_plan - called on all ranks. 2025-03-17T18:45:23.1016775Z Process the state_dict and produces a `SavePlan` that will be sent for global planning. 2025-03-17T18:45:23.1016858Z 2025-03-17T18:45:23.1017103Z 3) create_global_plan - called on the coordinator rank only. 2025-03-17T18:45:23.1017306Z Takes the SavePlan from all ranks and make any global decision. 2025-03-17T18:45:23.1017400Z 2025-03-17T18:45:23.1017516Z 4) finish_plan - called on all ranks. 2025-03-17T18:45:23.1017737Z This gives each rank a chance to adjust to global planning decisions. 2025-03-17T18:45:23.1017832Z 2025-03-17T18:45:23.1017992Z 5) resolve_data - called multiple times on each rank 2025-03-17T18:45:23.1018212Z Lookups a value on the `state_dict` for the storage layer to write. 2025-03-17T18:45:23.1018296Z 2025-03-17T18:45:23.1018624Z Users are recommended to extend DefaultSavePlanner instead of this interface directly as 2025-03-17T18:45:23.1018811Z most changes can be expressed by changes in a single method. 2025-03-17T18:45:23.1018909Z 2025-03-17T18:45:23.1019036Z There are 3 usual patterns of extension: 2025-03-17T18:45:23.1019133Z 2025-03-17T18:45:23.1019396Z Rewriting state_dict. 
This is the simplest way to extend the save process as it 2025-03-17T18:45:23.1019641Z doesn't requite understanding the intrincacies of how SavePlan works: 2025-03-17T18:45:23.1019723Z 2025-03-17T18:45:23.1019856Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1019994Z >>> class RenamePlanner(DefaultSavePlanner): 2025-03-17T18:45:23.1020114Z >>> def set_up_planner( 2025-03-17T18:45:23.1020204Z >>> self, 2025-03-17T18:45:23.1020326Z >>> state_dict: STATE_DICT_TYPE, 2025-03-17T18:45:23.1020458Z >>> storage_meta: Optional[StorageMeta], 2025-03-17T18:45:23.1020602Z >>> is_coordinator: bool, 2025-03-17T18:45:23.1020699Z >>> ) -> None: 2025-03-17T18:45:23.1020817Z >>> # prefix all keys with `foo_`` 2025-03-17T18:45:23.1021119Z >>> super().set_up_planner({"foo_" + k: v for k, v in state_dict.items()}, storage_meta, is_coordinator) 2025-03-17T18:45:23.1021213Z 2025-03-17T18:45:23.1021558Z Modifying local plan and lookup in tandem. This is useful when fine control of how data is persisted 2025-03-17T18:45:23.1021649Z 2025-03-17T18:45:23.1021769Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1021909Z >>> class FP16Planner(DefaultSavePlanner): 2025-03-17T18:45:23.1022023Z >>> def create_local_plan(self): 2025-03-17T18:45:23.1022196Z >>> plan = super().create_local_plan() 2025-03-17T18:45:23.1022300Z >>> for p in plan: 2025-03-17T18:45:23.1022428Z >>> if p.tensor_data is not None: 2025-03-17T18:45:23.1022593Z >>> p.tensor_data.properties.dtype = torch.float16 2025-03-17T18:45:23.1022704Z >>> return plan 2025-03-17T18:45:23.1022794Z >>> 2025-03-17T18:45:23.1022922Z >>> def resolve_data(self, write_item): 2025-03-17T18:45:23.1023054Z >>> item = super().resolve_data(write_item) 2025-03-17T18:45:23.1023345Z >>> return item if write_item.type == WriteItemType.BYTE_IO else item.to(torch.float16) 2025-03-17T18:45:23.1023437Z 2025-03-17T18:45:23.1023795Z Using the global planning step to make central decisions that can't be made individually by each rank 2025-03-17T18:45:23.1023881Z 2025-03-17T18:45:23.1024008Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1024129Z >>> from itertools import zip_longest 2025-03-17T18:45:23.1024252Z >>> from dataclasses import replace 2025-03-17T18:45:23.1024425Z >>> class DDPLoadBalancingPlanner(DefaultSavePlanner): 2025-03-17T18:45:23.1024716Z >>> # This uses the default local plan behavior of having all non-sharded writes in rank 0 2025-03-17T18:45:23.1024855Z >>> # This sample doesn't handle ShardedTensors 2025-03-17T18:45:23.1024999Z >>> def create_global_plan(self, all_plans): 2025-03-17T18:45:23.1025153Z >>> iters = [iter(all_plans[0].items)] * len(all_plans) 2025-03-17T18:45:23.1025261Z >>> items_per_rank = [ 2025-03-17T18:45:23.1025454Z >>> [item for item in items if item is not None] 2025-03-17T18:45:23.1025616Z >>> for items in zip(*zip_longest(*iters), strict=True) 2025-03-17T18:45:23.1025716Z >>> ] 2025-03-17T18:45:23.1025815Z >>> all_plans = [ 2025-03-17T18:45:23.1025943Z >>> replace(plan, items=items) 2025-03-17T18:45:23.1026140Z >>> for plan, items in zip(all_plans, items_per_rank, strict=True) 2025-03-17T18:45:23.1026236Z >>> ] 2025-03-17T18:45:23.1026382Z >>> return super().create_global_plan(all_plans) 2025-03-17T18:45:23.1026596Z 2025-03-17T18:45:23.1026877Z Finally, some planners need to save additional metadata in the checkpoint, this is 2025-03-17T18:45:23.1027165Z accomplished by having each rank contribute their data items in the local plan and 2025-03-17T18:45:23.1027282Z the global planner aggregate them: 
2025-03-17T18:45:23.1027373Z 2025-03-17T18:45:23.1027493Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1027666Z >>> class SaveExtraDataPlanner(DefaultSavePlanner): 2025-03-17T18:45:23.1027800Z >>> def create_local_plan(self) -> SavePlan: 2025-03-17T18:45:23.1027931Z >>> plan = super().create_local_plan() 2025-03-17T18:45:23.1028099Z >>> return replace(plan, planner_data="per-rank-data") 2025-03-17T18:45:23.1028199Z >>> 2025-03-17T18:45:23.1028505Z >>> def create_global_plan(self, all_plans: List[SavePlan]) -> Tuple[List[SavePlan], Metadata]: 2025-03-17T18:45:23.1028712Z >>> global_plan, metadata = super().create_global_plan(all_plans) 2025-03-17T18:45:23.1028908Z >>> merged_data = [p.planner_data for p in global_plan] 2025-03-17T18:45:23.1029093Z >>> metadata = replace(metadata, planner_data=merged_data) 2025-03-17T18:45:23.1029209Z >>> return global_plan, metadata 2025-03-17T18:45:23.1029304Z 2025-03-17T18:45:23.1029561Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1029659Z 2025-03-17T18:45:23.1029761Z warnings.warn(msg) 2025-03-17T18:45:23.1029854Z 2025-03-17T18:45:23.1030064Z --- Parse Warning: 56 / 116 --- 2025-03-17T18:45:23.1031051Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=LoadPlanner in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/planner.py line=293. 2025-03-17T18:45:23.1031349Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1031442Z 2025-03-17T18:45:23.1031739Z Abstract class defining the protocol used by load_state_dict to plan the load process. 2025-03-17T18:45:23.1031832Z 2025-03-17T18:45:23.1032129Z LoadPlanner are stateful objects that can be used to customize the whole load process. 2025-03-17T18:45:23.1032220Z 2025-03-17T18:45:23.1032505Z LoadPlanner acts as an access proxy to the state_dict, so any transformation done to it 2025-03-17T18:45:23.1032634Z will be visible to the whole process. 2025-03-17T18:45:23.1032719Z 2025-03-17T18:45:23.1033010Z A planner subclass can expect the following sequence of calls during load_state_dict: 2025-03-17T18:45:23.1033095Z 2025-03-17T18:45:23.1033231Z 1) set_up_planner - called on all ranks. 2025-03-17T18:45:23.1033370Z Signals the start of loading a checkpoint. 2025-03-17T18:45:23.1033460Z 2025-03-17T18:45:23.1033590Z 2) create_local_plan - called on all ranks. 2025-03-17T18:45:23.1033884Z Process the state_dict and produces a `LoadPlan` that will be sent for global planning. 2025-03-17T18:45:23.1033971Z 2025-03-17T18:45:23.1034164Z 3) create_global_plan - called on the coordinator rank only. 2025-03-17T18:45:23.1034366Z Takes the LoadPlan from all ranks and make any global decision. 2025-03-17T18:45:23.1034459Z 2025-03-17T18:45:23.1034611Z 4) load_bytes - called multiple times on each rank 2025-03-17T18:45:23.1034840Z This is called once per non-tensor value in state_dict. 2025-03-17T18:45:23.1034927Z 2025-03-17T18:45:23.1035154Z 5) resolve_tensor and commit_tensor - called multiple times on each rank 2025-03-17T18:45:23.1035352Z They are called in pair for each Tensor value in state_dict. 2025-03-17T18:45:23.1035441Z 2025-03-17T18:45:23.1035761Z Users are recommended to extend DefaultLoadPlanner instead of this interface directly as 2025-03-17T18:45:23.1035948Z most changes can be expressed by changes in a single method. 
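As a starting point, the call sequence listed above can be made visible with a tracing subclass that only delegates to DefaultLoadPlanner. This is a sketch: the method signatures follow the examples shown below, and create_global_plan is omitted because it runs on the coordinator rank only.

# Do-nothing tracer surfacing the LoadPlanner protocol calls
# (set_up_planner, create_local_plan, load_bytes, resolve_tensor/commit_tensor).
from torch.distributed.checkpoint import DefaultLoadPlanner

class TracingLoadPlanner(DefaultLoadPlanner):
    def set_up_planner(self, state_dict, metadata, is_coordinator):
        print("set_up_planner: coordinator =", is_coordinator)
        super().set_up_planner(state_dict, metadata, is_coordinator)

    def create_local_plan(self):
        plan = super().create_local_plan()
        print("create_local_plan:", len(plan.items), "items")
        return plan

    def load_bytes(self, read_item, value):
        print("load_bytes:", read_item.dest_index.fqn)
        super().load_bytes(read_item, value)

    def resolve_tensor(self, read_item):
        print("resolve_tensor:", read_item.dest_index.fqn)
        return super().resolve_tensor(read_item)

    def commit_tensor(self, read_item, tensor):
        print("commit_tensor:", read_item.dest_index.fqn)
        super().commit_tensor(read_item, tensor)

# e.g. dcp.load(state_dict, checkpoint_id="...", planner=TracingLoadPlanner())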
2025-03-17T18:45:23.1036043Z 2025-03-17T18:45:23.1036179Z There are two usual patterns of extension: 2025-03-17T18:45:23.1036274Z 2025-03-17T18:45:23.1036535Z Rewriting state_dict. This is the simplest way to extend the load process as it 2025-03-17T18:45:23.1036983Z doesn't requite understanding the intrincacies of how LoadPlan works. We need 2025-03-17T18:45:23.1037227Z to keep a reference to the original state_dict as load happens in place so 2025-03-17T18:45:23.1037368Z we need to be able to perform it in place 2025-03-17T18:45:23.1037451Z 2025-03-17T18:45:23.1037582Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1037721Z >>> class RenamePlanner(DefaultLoadPlanner): 2025-03-17T18:45:23.1037839Z >>> def set_up_planner( 2025-03-17T18:45:23.1037931Z >>> self, 2025-03-17T18:45:23.1038057Z >>> state_dict: STATE_DICT_TYPE, 2025-03-17T18:45:23.1038164Z >>> metadata: Metadata, 2025-03-17T18:45:23.1038285Z >>> is_coordinator: bool, 2025-03-17T18:45:23.1038436Z >>> ) -> None: 2025-03-17T18:45:23.1038574Z >>> self.original_state_dict = state_dict 2025-03-17T18:45:23.1038754Z >>> state_dict = {"foo_" + k: v for k, v in state_dict.items()} 2025-03-17T18:45:23.1038851Z >>> 2025-03-17T18:45:23.1038974Z >>> if self.flatten_sharded_tensors: 2025-03-17T18:45:23.1039142Z >>> state_dict = _flatten_sharded_tensors(state_dict) 2025-03-17T18:45:23.1039231Z >>> 2025-03-17T18:45:23.1039355Z >>> if self.flatten_state_dict: 2025-03-17T18:45:23.1039545Z >>> state_dict, self.mappings = flatten_state_dict(state_dict) 2025-03-17T18:45:23.1039639Z >>> 2025-03-17T18:45:23.1039799Z >>> self.state_dict = state_dict 2025-03-17T18:45:23.1039917Z >>> self.metadata = metadata 2025-03-17T18:45:23.1040043Z >>> self.is_coordinator = is_coordinator 2025-03-17T18:45:23.1040138Z >>> 2025-03-17T18:45:23.1040263Z >>> def load_bytes(self, read_item, value): 2025-03-17T18:45:23.1040385Z >>> # Remove the "foo_" prefix 2025-03-17T18:45:23.1040715Z >>> self.original_state_dict[read_item.dest_index.fqn[4:]] = torch.load(value, weights_only=False) 2025-03-17T18:45:23.1040812Z 2025-03-17T18:45:23.1040895Z 2025-03-17T18:45:23.1041169Z Modifying resolve_tensor and commit_tensor to handle load time transformation. 2025-03-17T18:45:23.1041257Z 2025-03-17T18:45:23.1041377Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1041550Z >>> class MetaModelMaterialize(DefaultSavePlanner): 2025-03-17T18:45:23.1041674Z >>> def resolve_tensor(self, read_item): 2025-03-17T18:45:23.1041817Z >>> tensor = super().resolve_tensor(read_item) 2025-03-17T18:45:23.1041972Z >>> return torch.empty_like(tensor, device="cpu") 2025-03-17T18:45:23.1042064Z >>> 2025-03-17T18:45:23.1042199Z >>> def commit_tensor(self, read_item, tensor): 2025-03-17T18:45:23.1042368Z >>> self.state_dict[read_item.dest_index.fqn] = tensor 2025-03-17T18:45:23.1042455Z 2025-03-17T18:45:23.1042722Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1042807Z 2025-03-17T18:45:23.1042918Z warnings.warn(msg) 2025-03-17T18:45:23.1043003Z 2025-03-17T18:45:23.1043223Z --- Parse Warning: 57 / 116 --- 2025-03-17T18:45:23.1044288Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=get_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict.py line=1124. 
2025-03-17T18:45:23.1044568Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1044656Z 2025-03-17T18:45:23.1044835Z Return the model state_dict and optimizers state_dict. 2025-03-17T18:45:23.1044920Z 2025-03-17T18:45:23.1045155Z ``get_state_dict`` can process any module that is parallelized by PyTorch 2025-03-17T18:45:23.1045417Z FSDP/fully_shard, DDP/replicate, tensor_parallel/parallelize_module, and any 2025-03-17T18:45:23.1045682Z combination of these parallelisms. The main functions of ``get_state_dict`` 2025-03-17T18:45:23.1045906Z are: 1.) returning a model and optimizer state_dict that can be resharded 2025-03-17T18:45:23.1046132Z with a different number of trainers and/or different parallelisms. 2025-03-17T18:45:23.1046390Z 2.) hiding the parallelism-specific state_dict APIs. Users don't have to call 2025-03-17T18:45:23.1046493Z these APIs. 2025-03-17T18:45:23.1046623Z 3.) sanity checking the result state_dict. 2025-03-17T18:45:23.1046716Z 2025-03-17T18:45:23.1046933Z The keys of the result state dictionary are the canonical FQNs (Fully 2025-03-17T18:45:23.1047184Z Qualified Names). A canonical FQN refers to the FQN based on a parameter's 2025-03-17T18:45:23.1047428Z position in an nn.Module hierarchy. More specifically, a canonical FQN to a 2025-03-17T18:45:23.1047676Z parameter is the FQN returned by ``module.named_parameters()`` or 2025-03-17T18:45:23.1047890Z ``module.named_buffers()`` when the module is not distributed by any 2025-03-17T18:45:23.1048159Z parallelisms. Since the optimizer internally uses parameter IDs to represent 2025-03-17T18:45:23.1048380Z a parameter, there will be a conversion from the parameter IDs to the 2025-03-17T18:45:23.1048516Z canonical FQNs when calling this API. 2025-03-17T18:45:23.1048601Z 2025-03-17T18:45:23.1048840Z ``get_state_dict`` can also process a module that is not parallelized. In 2025-03-17T18:45:23.1049069Z such a case, ``get_state_dict`` only performs one function -- converting the 2025-03-17T18:45:23.1049252Z optimizer parameter IDs to the canonical FQNs. 2025-03-17T18:45:23.1049340Z 2025-03-17T18:45:23.1049441Z Example: 2025-03-17T18:45:23.1049545Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1049652Z >>> import torch 2025-03-17T18:45:23.1049900Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:23.1050116Z >>> from torch.nn.parallel import DistributedDataParallel as DDP 2025-03-17T18:45:23.1050346Z >>> from torch.distributed.checkpoint.state_dict import get_state_dict 2025-03-17T18:45:23.1050442Z 2025-03-17T18:45:23.1050575Z >>> fsdp_model = FSDP(copy.deepcopy(model)) 2025-03-17T18:45:23.1050780Z >>> fsdp_optim = torch.optim.Adam(model.parameters(), lr=1e-3) 2025-03-17T18:45:23.1050907Z >>> ddp_model = DDP(copy.deepcopy(model)) 2025-03-17T18:45:23.1051099Z >>> ddp_optim = torch.optim.Adam(model.parameters(), lr=1e-3) 2025-03-17T18:45:23.1051186Z 2025-03-17T18:45:23.1051280Z 2025-03-17T18:45:23.1051524Z >>> ddp_state_dict, ddp_optim_state_dict = get_state_dict(ddp_model, ddp_optim) 2025-03-17T18:45:23.1051759Z >>> fsdp_state_dict, fsdp_optim_state_dict = get_state_dict( 2025-03-17T18:45:23.1051871Z ... fsdp_model, fsdp_optim 2025-03-17T18:45:23.1051970Z ... ) 2025-03-17T18:45:23.1052058Z 2025-03-17T18:45:23.1052284Z >>> # if we simply call ddp_model.state_dict() and fsdp_model.state_dict(), 2025-03-17T18:45:23.1052392Z >>> # the asserts will fail. 
2025-03-17T18:45:23.1052526Z >>> assert ddp_state_dict == fsdp_state_dict 2025-03-17T18:45:23.1052731Z >>> assert ddp_optim_state == fsdp_optim_state_dict 2025-03-17T18:45:23.1052819Z 2025-03-17T18:45:23.1052918Z 2025-03-17T18:45:23.1053008Z Args: 2025-03-17T18:45:23.1053162Z model (nn.Module): the nn.Module to the model. 2025-03-17T18:45:23.1053355Z optimizers (Union[None, Optimizer, Iterable[Optimizer]]): 2025-03-17T18:45:23.1053530Z The optimizers that are used to optimize ``model``. 2025-03-17T18:45:23.1053816Z submodules (deprecated): Optional[set[nn.Module]]: only return the model parameters 2025-03-17T18:45:23.1053940Z that belong to the submodules. 2025-03-17T18:45:23.1054122Z options (StateDictOptions): the options to control how 2025-03-17T18:45:23.1054343Z model state_dict and optimizer state_dict should be returned. See 2025-03-17T18:45:23.1054471Z `StateDictOptions` for the details. 2025-03-17T18:45:23.1054564Z 2025-03-17T18:45:23.1054653Z Returns: 2025-03-17T18:45:23.1054863Z ``Tuple`` that contain model state_dict and optimizer state_dict. 2025-03-17T18:45:23.1054948Z 2025-03-17T18:45:23.1055189Z :rtype: typing.Tuple[typing.Dict[str, ValueType], OptimizerStateType] 2025-03-17T18:45:23.1055275Z 2025-03-17T18:45:23.1055540Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1055628Z 2025-03-17T18:45:23.1055735Z warnings.warn(msg) 2025-03-17T18:45:23.1055817Z 2025-03-17T18:45:23.1056021Z --- Parse Warning: 58 / 116 --- 2025-03-17T18:45:23.1057006Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_loader.py line=62. 2025-03-17T18:45:23.1057327Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1057412Z 2025-03-17T18:45:23.1057621Z Load a checkpoint into a distributed state dict in SPMD style. 2025-03-17T18:45:23.1057710Z 2025-03-17T18:45:23.1057941Z Each rank must have the same keys in their ``state_dict`` provided to this 2025-03-17T18:45:23.1058179Z API. Mismatched keys may result in hangs or errors. If unsure, you can use 2025-03-17T18:45:23.1058409Z the ``utils._assert_same_keys`` API to check (but may incur communication 2025-03-17T18:45:23.1058526Z costs). 2025-03-17T18:45:23.1058617Z 2025-03-17T18:45:23.1058803Z Each rank will try to read the least amount of data necessary 2025-03-17T18:45:23.1059048Z to fulfill the requested `state_dict`. When loading :class:`ShardedTensor` 2025-03-17T18:45:23.1059304Z or :class:`DTensor` instances, each rank only reads data for their local shards. 2025-03-17T18:45:23.1059398Z 2025-03-17T18:45:23.1059662Z For each ``Stateful`` object (having both a ``state_dict`` and a ``load_state_dict``), 2025-03-17T18:45:23.1059933Z load will first call ``state_dict`` before attempting deserialization, followed by 2025-03-17T18:45:23.1060109Z ``load_state_dict`` once the deserialization is complete. 2025-03-17T18:45:23.1060381Z For each non-``Stateful`` object, load will deserailize the object, and then replace 2025-03-17T18:45:23.1060535Z it in the ``state_dict`` with the deserialized object. 2025-03-17T18:45:23.1060630Z 2025-03-17T18:45:23.1060726Z .. warning:: 2025-03-17T18:45:23.1060910Z All tensors in ``state_dict`` must be allocated on their 2025-03-17T18:45:23.1061078Z destination device *prior to* calling this function. 
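The Stateful contract mentioned above is simply the pair of methods named there. A minimal sketch of an application-state object that can sit alongside model weights in the checkpoint dict (class and key names are illustrative):

# Minimal "Stateful" object: anything with both state_dict() and load_state_dict()
# can be placed in the checkpoint state_dict; load() calls state_dict() before
# deserialization and load_state_dict() once deserialization is complete.
class TrainState:
    def __init__(self):
        self.step = 0

    def state_dict(self):
        return {"step": self.step}

    def load_state_dict(self, state_dict):
        self.step = state_dict["step"]

# e.g. dcp.save({"app": train_state, "model": model.state_dict()}, checkpoint_id="...")
#      dcp.load({"app": train_state, "model": model.state_dict()}, checkpoint_id="...")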
2025-03-17T18:45:23.1061170Z 2025-03-17T18:45:23.1061405Z All non-tensor data is loaded using `torch.load()` and modified in place 2025-03-17T18:45:23.1061507Z on state_dict. 2025-03-17T18:45:23.1061591Z 2025-03-17T18:45:23.1061686Z .. warning:: 2025-03-17T18:45:23.1061894Z Users must call `load_state_dict` on the root module to ensure load 2025-03-17T18:45:23.1062090Z pos-processing and non-tensor data properly propagates. 2025-03-17T18:45:23.1062220Z 2025-03-17T18:45:23.1062309Z .. note: 2025-03-17T18:45:23.1062545Z If no process group is initialized, this function will assume the intent 2025-03-17T18:45:23.1062776Z is to load a checkpoint into the local process. This can be useful in the 2025-03-17T18:45:23.1063040Z case of local inference, and when using regular Tensors (as opposed to DTensor 2025-03-17T18:45:23.1063143Z or ShardedTensor) 2025-03-17T18:45:23.1063235Z 2025-03-17T18:45:23.1063321Z .. note: 2025-03-17T18:45:23.1063475Z Rank 0 is assumed to be the coordinator rank. 2025-03-17T18:45:23.1063561Z 2025-03-17T18:45:23.1063657Z Args: 2025-03-17T18:45:23.1063875Z state_dict (Dict[str, Any]): The state_dict to load the checkpoint into. 2025-03-17T18:45:23.1064032Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:23.1064247Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:23.1064469Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:23.1064642Z It can also be a key if the storage is a key-value store. 2025-03-17T18:45:23.1064755Z (Default: ``None``) 2025-03-17T18:45:23.1064891Z storage_reader (Optional[StorageReader]): 2025-03-17T18:45:23.1065112Z Instance of StorageWriter used to perform reads. If this is not 2025-03-17T18:45:23.1065319Z specified, DCP will automatically infer the reader based on the 2025-03-17T18:45:23.1065532Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:23.1065672Z be raised. (Default: ``None``) 2025-03-17T18:45:23.1065803Z planner (Optional[LoadPlanner]): 2025-03-17T18:45:23.1066009Z Instance of LoadPlanner. If this is not specificed, the default 2025-03-17T18:45:23.1066149Z planner will be used. (Default: ``None``) 2025-03-17T18:45:23.1066286Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:23.1066585Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:23.1066688Z (Default: ``None``) 2025-03-17T18:45:23.1066917Z no_dist (bool): If ``True``, this function will assume the intent is to load 2025-03-17T18:45:23.1067210Z a checkpoint without using cross-rank synchronization. (Default: ``False``) 2025-03-17T18:45:23.1067311Z Returns: 2025-03-17T18:45:23.1067399Z None. 2025-03-17T18:45:23.1067494Z 2025-03-17T18:45:23.1067585Z Examples 2025-03-17T18:45:23.1067704Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1067808Z >>> my_model = MyModule() 2025-03-17T18:45:23.1067961Z >>> optimizer = Adagrad(my_model.parameters()) 2025-03-17T18:45:23.1068095Z >>> model_state_dict = my_model.state_dict() 2025-03-17T18:45:23.1068341Z >>> fs_storage_reader = torch.distributed.checkpoint.FileSystemReader( 2025-03-17T18:45:23.1068452Z ... "/checkpoint/1" 2025-03-17T18:45:23.1068546Z ... 
) 2025-03-17T18:45:23.1068633Z 2025-03-17T18:45:23.1068797Z >>> torch.distributed.checkpoint.load_state_dict( 2025-03-17T18:45:23.1068912Z >>> state_dict=model_state_dict, 2025-03-17T18:45:23.1069039Z >>> storage_reader=fs_storage_reader, 2025-03-17T18:45:23.1069127Z >>> ) 2025-03-17T18:45:23.1069215Z 2025-03-17T18:45:23.1069424Z >>> # module.load_state_dict() function might have customized steps 2025-03-17T18:45:23.1069557Z >>> # to flush the state_dict, must call it to 2025-03-17T18:45:23.1069679Z >>> # ensure correct behavior. 2025-03-17T18:45:23.1069815Z >>> my_model.load_state_dict(model_state_dict) 2025-03-17T18:45:23.1069910Z 2025-03-17T18:45:23.1070000Z .. note:: 2025-03-17T18:45:23.1070225Z load_state_dict uses collectives to coordinate reads across ranks. 2025-03-17T18:45:23.1070523Z For NCCL-based process groups, internal tensor representations of 2025-03-17T18:45:23.1070781Z objects must be moved to the GPU device before communication takes place. 2025-03-17T18:45:23.1071008Z In this case, the device used is given by ``torch.cuda.current_device()`` 2025-03-17T18:45:23.1071247Z and it is the user's responsibility to ensure that this is set so that each 2025-03-17T18:45:23.1071440Z rank has an individual GPU, via ``torch.cuda.set_device()``. 2025-03-17T18:45:23.1071532Z 2025-03-17T18:45:23.1071790Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1071886Z 2025-03-17T18:45:23.1071990Z warnings.warn(msg) 2025-03-17T18:45:23.1072083Z 2025-03-17T18:45:23.1072294Z --- Parse Warning: 59 / 116 --- 2025-03-17T18:45:23.1073290Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=save in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_saver.py line=85. 2025-03-17T18:45:23.1073558Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1073651Z 2025-03-17T18:45:23.1073775Z Save a distributed model in SPMD style. 2025-03-17T18:45:23.1073868Z 2025-03-17T18:45:23.1074067Z This function is different from ``torch.save()`` as it handles 2025-03-17T18:45:23.1074342Z ``ShardedTensor`` , and ``DTensor`` by having each rank only save their local shards. 2025-03-17T18:45:23.1074426Z 2025-03-17T18:45:23.1074693Z For each ``Stateful`` object (having both a ``state_dict`` and a ``load_state_dict``), 2025-03-17T18:45:23.1074874Z save will call ``state_dict`` before serialization. 2025-03-17T18:45:23.1074966Z 2025-03-17T18:45:23.1075059Z .. warning:: 2025-03-17T18:45:23.1075307Z There is no guarantees of Backwards Compatibility across PyTorch versions 2025-03-17T18:45:23.1075411Z for saved state_dicts. 2025-03-17T18:45:23.1075502Z 2025-03-17T18:45:23.1075594Z .. warning:: 2025-03-17T18:45:23.1075813Z If using the `process_group` argument, make sure that only its ranks 2025-03-17T18:45:23.1076023Z call `save_state_dict` and that all data in state_dict belong to it. 2025-03-17T18:45:23.1076139Z 2025-03-17T18:45:23.1076227Z .. note:: 2025-03-17T18:45:23.1076497Z When saving checkpoint for FSDP's `ShardingStrategy.HYBRID_SHARD`, only one of 2025-03-17T18:45:23.1076761Z the shard_group should be calling `save_state_dict` and the corresponding process 2025-03-17T18:45:23.1076880Z group needs to be passed in. 2025-03-17T18:45:23.1076964Z 2025-03-17T18:45:23.1077058Z .. 
note:: 2025-03-17T18:45:23.1077331Z If no process group is available, this function assumes the intention is to save the 2025-03-17T18:45:23.1077454Z state_dict in the local process. 2025-03-17T18:45:23.1077534Z 2025-03-17T18:45:23.1077627Z .. note: 2025-03-17T18:45:23.1077771Z Rank 0 is assumed to be the coordinator rank. 2025-03-17T18:45:23.1077870Z 2025-03-17T18:45:23.1077955Z 2025-03-17T18:45:23.1078041Z Args: 2025-03-17T18:45:23.1078213Z state_dict (Dict[str, Any]): The state_dict to save. 2025-03-17T18:45:23.1078363Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:23.1078595Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:23.1078803Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:23.1078988Z It can also be a key if the storage is a key-value store. 2025-03-17T18:45:23.1079091Z (Default: ``None``) 2025-03-17T18:45:23.1079242Z storage_writer (Optional[StorageWriter]): 2025-03-17T18:45:23.1079459Z Instance of StorageWriter used to perform writes. If this is not 2025-03-17T18:45:23.1079682Z specified, DCP will automatically infer the writer based on the 2025-03-17T18:45:23.1079938Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:23.1080065Z be raised. (Default: ``None``) 2025-03-17T18:45:23.1080188Z planner (Optional[SavePlanner]): 2025-03-17T18:45:23.1080406Z Instance of SavePlanner. If this is not specificed, the default 2025-03-17T18:45:23.1080543Z planner will be used. (Default: ``None``) 2025-03-17T18:45:23.1080692Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:23.1080880Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:23.1080996Z (Default: ``None``) 2025-03-17T18:45:23.1081096Z no_dist (bool): 2025-03-17T18:45:23.1081284Z If ``True``, this function will assume the intent is to load 2025-03-17T18:45:23.1081464Z a checkpoint without using cross-rank synchronization. 2025-03-17T18:45:23.1081582Z (Default: ``False``) 2025-03-17T18:45:23.1081668Z 2025-03-17T18:45:23.1081770Z Returns: 2025-03-17T18:45:23.1081939Z Metadata: Metadata object for the saved checkpoint. 2025-03-17T18:45:23.1082037Z 2025-03-17T18:45:23.1082129Z Example: 2025-03-17T18:45:23.1082244Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1082350Z >>> my_model = MyModule() 2025-03-17T18:45:23.1082449Z 2025-03-17T18:45:23.1082566Z >>> state_dict = {"model": my_model} 2025-03-17T18:45:23.1082660Z 2025-03-17T18:45:23.1082899Z >>> fs_storage_writer = torch.distributed.checkpoint.FileSystemWriter( 2025-03-17T18:45:23.1083010Z ... "/checkpoint/1" 2025-03-17T18:45:23.1083125Z ... ) 2025-03-17T18:45:23.1083269Z >>> torch.distributed.checkpoint.save( 2025-03-17T18:45:23.1083377Z >>> state_dict=state_dict, 2025-03-17T18:45:23.1083509Z >>> storage_writer=fs_storage_writer, 2025-03-17T18:45:23.1083599Z >>> ) 2025-03-17T18:45:23.1083681Z 2025-03-17T18:45:23.1083780Z .. note:: 2025-03-17T18:45:23.1084003Z save_state_dict uses collectives to coordinate writes across ranks. 2025-03-17T18:45:23.1084234Z For NCCL-based process groups, internal tensor representations of 2025-03-17T18:45:23.1084469Z objects must be moved to the GPU device before communication takes place. 2025-03-17T18:45:23.1084733Z In this case, the device used is given by ``torch.cuda.current_device()`` 2025-03-17T18:45:23.1084945Z and it is the user's responsibility to ensure that this is set so that 2025-03-17T18:45:23.1085158Z each rank has an individual GPU, via ``torch.cuda.set_device()``. 
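A minimal sketch of the same save call written with checkpoint_id, which the Args section above says lets DCP infer the writer (assumptions: a single local process with no process group, and "/checkpoint/1" is only a placeholder path):

    import torch
    import torch.distributed.checkpoint as dcp

    state_dict = {"model": torch.nn.Linear(4, 4).state_dict()}      # toy state to persist
    metadata = dcp.save(state_dict, checkpoint_id="/checkpoint/1")  # writer inferred from the path
    # `metadata` is the Metadata object described under Returns above.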
2025-03-17T18:45:23.1085245Z 2025-03-17T18:45:23.1085513Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1085595Z 2025-03-17T18:45:23.1085707Z warnings.warn(msg) 2025-03-17T18:45:23.1085792Z 2025-03-17T18:45:23.1086000Z --- Parse Warning: 60 / 116 --- 2025-03-17T18:45:23.1087014Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=async_save in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/checkpoint/state_dict_saver.py line=195. 2025-03-17T18:45:23.1087291Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1087566Z Asynchronous version of ``save``. This code first de-stages the state_dict on to the 2025-03-17T18:45:23.1087875Z staging storage (defaults to CPU memory), and then calls the `save` in a separate thread. 2025-03-17T18:45:23.1087961Z 2025-03-17T18:45:23.1088069Z .. warning:: 2025-03-17T18:45:23.1088237Z This feature is experimental and subject to change. 2025-03-17T18:45:23.1088332Z 2025-03-17T18:45:23.1088421Z Args: 2025-03-17T18:45:23.1088591Z state_dict (Dict[str, Any]): The state_dict to save. 2025-03-17T18:45:23.1088744Z checkpoint_id (Union[str, os.PathLike, None]): 2025-03-17T18:45:23.1089027Z The ID of this checkpoint instance. The meaning of the checkpoint_id 2025-03-17T18:45:23.1089238Z depends on the storage. It can be a path to a folder or to a file. 2025-03-17T18:45:23.1089419Z It can also be a key if the storage is a key-value store. 2025-03-17T18:45:23.1089525Z (Default: ``None``) 2025-03-17T18:45:23.1089674Z storage_writer (Optional[StorageWriter]): 2025-03-17T18:45:23.1089892Z Instance of StorageWriter used to perform 'stage' and 'save'. If 2025-03-17T18:45:23.1090147Z this is not specified, DCP will automatically infer the writer based on the 2025-03-17T18:45:23.1090358Z checkpoint_id. If checkpoint_id is also None, an exception will 2025-03-17T18:45:23.1090482Z be raised. (Default: ``None``) 2025-03-17T18:45:23.1090607Z planner (Optional[SavePlanner]): 2025-03-17T18:45:23.1090829Z Instance of SavePlanner. If this is not specificed, the default 2025-03-17T18:45:23.1090964Z planner will be used. (Default: ``None``) 2025-03-17T18:45:23.1091109Z process_group (Optional[ProcessGroup]): 2025-03-17T18:45:23.1091296Z ProcessGroup to be used for cross-rank synchronization. 2025-03-17T18:45:23.1091412Z (Default: ``None``) 2025-03-17T18:45:23.1091498Z 2025-03-17T18:45:23.1091595Z Returns: 2025-03-17T18:45:23.1091815Z Future: A future holding the resultant Metadata object from `save`. 2025-03-17T18:45:23.1091910Z 2025-03-17T18:45:23.1092002Z Example: 2025-03-17T18:45:23.1092143Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1092251Z >>> my_model = MyModule() 2025-03-17T18:45:23.1092346Z 2025-03-17T18:45:23.1092466Z >>> state_dict = {"model": my_model} 2025-03-17T18:45:23.1092560Z 2025-03-17T18:45:23.1092799Z >>> fs_storage_writer = torch.distributed.checkpoint.FileSystemWriter( 2025-03-17T18:45:23.1092916Z ... "/checkpoint/1" 2025-03-17T18:45:23.1093005Z ... ) 2025-03-17T18:45:23.1093229Z >>> checkpoint_future = torch.distributed.checkpoint.async_save( 2025-03-17T18:45:23.1093342Z >>> state_dict=state_dict, 2025-03-17T18:45:23.1093509Z >>> storage_writer=fs_storage_writer, 2025-03-17T18:45:23.1093599Z >>> ) 2025-03-17T18:45:23.1093694Z >>> 2025-03-17T18:45:23.1093800Z >>> # ... do some work ... 
2025-03-17T18:45:23.1093885Z >>> 2025-03-17T18:45:23.1094014Z >>> checkpoint_future.result() 2025-03-17T18:45:23.1094099Z 2025-03-17T18:45:23.1094196Z 2025-03-17T18:45:23.1094451Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1094547Z 2025-03-17T18:45:23.1094648Z warnings.warn(msg) 2025-03-17T18:45:23.1094743Z 2025-03-17T18:45:23.1094938Z --- Parse Warning: 61 / 116 --- 2025-03-17T18:45:23.1096028Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=construct_and_record_rdzv_event in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/elastic/events/__init__.py line=94. 2025-03-17T18:45:23.1096299Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1096394Z 2025-03-17T18:45:23.1096604Z Initialize rendezvous event object and record its operations. 2025-03-17T18:45:23.1096699Z 2025-03-17T18:45:23.1096785Z Args: 2025-03-17T18:45:23.1096933Z run_id (str): The run id of the rendezvous. 2025-03-17T18:45:23.1097086Z message (str): The message describing the event. 2025-03-17T18:45:23.1097355Z node_state (NodeState): The state of the node (INIT, RUNNING, SUCCEEDED, FAILED). 2025-03-17T18:45:23.1097543Z name (str): Event name. (E.g. Current action being performed). 2025-03-17T18:45:23.1097724Z hostname (str): Hostname of the node. 2025-03-17T18:45:23.1097874Z pid (Optional[int]): The process id of the node. 2025-03-17T18:45:23.1098132Z master_endpoint (str): The master endpoint for the rendezvous store, if known. 2025-03-17T18:45:23.1098407Z local_id (Optional[int]): The local_id of the node, if defined in dynamic_rendezvous.py 2025-03-17T18:45:23.1098583Z rank (Optional[int]): The rank of the node, if known. 2025-03-17T18:45:23.1098673Z Returns: 2025-03-17T18:45:23.1098772Z None 2025-03-17T18:45:23.1098862Z Example: 2025-03-17T18:45:23.1099010Z >>> # See DynamicRendezvousHandler class 2025-03-17T18:45:23.1099104Z >>> def _record( 2025-03-17T18:45:23.1099204Z ... self, 2025-03-17T18:45:23.1099304Z ... message: str, 2025-03-17T18:45:23.1099458Z ... node_state: NodeState = NodeState.RUNNING, 2025-03-17T18:45:23.1099574Z ... rank: Optional[int] = None, 2025-03-17T18:45:23.1099681Z ... ) -> None: 2025-03-17T18:45:23.1099805Z ... construct_and_record_rdzv_event( 2025-03-17T18:45:23.1099981Z ... name=f"{self.__class__.__name__}.{get_method_name()}", 2025-03-17T18:45:23.1100107Z ... run_id=self._settings.run_id, 2025-03-17T18:45:23.1100218Z ... message=message, 2025-03-17T18:45:23.1100329Z ... node_state=node_state, 2025-03-17T18:45:23.1100464Z ... hostname=self._this_node.addr, 2025-03-17T18:45:23.1100579Z ... pid=self._this_node.pid, 2025-03-17T18:45:23.1100718Z ... local_id=self._this_node.local_id, 2025-03-17T18:45:23.1100844Z ... rank=rank, 2025-03-17T18:45:23.1100939Z ... ) 2025-03-17T18:45:23.1101024Z 2025-03-17T18:45:23.1101286Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1101383Z 2025-03-17T18:45:23.1101488Z warnings.warn(msg) 2025-03-17T18:45:23.1101584Z 2025-03-17T18:45:23.1101779Z --- Parse Warning: 62 / 116 --- 2025-03-17T18:45:23.1102723Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=MixedPrecision in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py line=114. 
2025-03-17T18:45:23.1103019Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1103116Z 2025-03-17T18:45:23.1103284Z This configures FSDP-native mixed precision training. 2025-03-17T18:45:23.1103382Z 2025-03-17T18:45:23.1103474Z Attributes: 2025-03-17T18:45:23.1103725Z param_dtype (Optional[torch.dtype]): This specifies the dtype for model 2025-03-17T18:45:23.1103931Z parameters during forward and backward and thus the dtype for 2025-03-17T18:45:23.1104166Z forward and backward computation. Outside forward and backward, the 2025-03-17T18:45:23.1104368Z *sharded* parameters are kept in full precision (e.g. for the 2025-03-17T18:45:23.1104593Z optimizer step), and for model checkpointing, the parameters are 2025-03-17T18:45:23.1104754Z always saved in full precision. (Default: ``None``) 2025-03-17T18:45:23.1104982Z reduce_dtype (Optional[torch.dtype]): This specifies the dtype for 2025-03-17T18:45:23.1105203Z gradient reduction (i.e. reduce-scatter or all-reduce). If this is 2025-03-17T18:45:23.1105396Z ``None`` but ``param_dtype`` is not ``None``, then this takes on 2025-03-17T18:45:23.1105600Z the ``param_dtype`` value, still running gradient reduction in low 2025-03-17T18:45:23.1105830Z precision. This is permitted to differ from ``param_dtype``, e.g. 2025-03-17T18:45:23.1106032Z to force gradient reduction to run in full precision. (Default: 2025-03-17T18:45:23.1106134Z ``None``) 2025-03-17T18:45:23.1106397Z buffer_dtype (Optional[torch.dtype]): This specifies the dtype for 2025-03-17T18:45:23.1106702Z buffers. FSDP does not shard buffers. Rather, FSDP casts them to 2025-03-17T18:45:23.1106909Z ``buffer_dtype`` in the first forward pass and keeps them in that 2025-03-17T18:45:23.1107142Z dtype thereafter. For model checkpointing, the buffers are saved 2025-03-17T18:45:23.1107334Z in full precision except for ``LOCAL_STATE_DICT``. (Default: 2025-03-17T18:45:23.1107441Z ``None``) 2025-03-17T18:45:23.1107646Z keep_low_precision_grads (bool): If ``False``, then FSDP upcasts 2025-03-17T18:45:23.1107885Z gradients to full precision after the backward pass in preparation 2025-03-17T18:45:23.1108097Z for the optimizer step. If ``True``, then FSDP keeps the gradients 2025-03-17T18:45:23.1108315Z in the dtype used for gradient reduction, which can save memory if 2025-03-17T18:45:23.1108531Z using a custom optimizer that supports running in low precision. 2025-03-17T18:45:23.1108651Z (Default: ``False``) 2025-03-17T18:45:23.1108867Z cast_forward_inputs (bool): If ``True``, then this FSDP module casts 2025-03-17T18:45:23.1109085Z its forward args and kwargs to ``param_dtype``. This is to ensure 2025-03-17T18:45:23.1109309Z that parameter and input dtypes match for forward computation, as 2025-03-17T18:45:23.1109532Z required by many ops. This may need to be set to ``True`` when only 2025-03-17T18:45:23.1109762Z applying mixed precision to some but not all FSDP modules, in which 2025-03-17T18:45:23.1110014Z case a mixed-precision FSDP submodule needs to recast its inputs. 2025-03-17T18:45:23.1110120Z (Default: ``False``) 2025-03-17T18:45:23.1110358Z cast_root_forward_inputs (bool): If ``True``, then the root FSDP module 2025-03-17T18:45:23.1110565Z casts its forward args and kwargs to ``param_dtype``, overriding 2025-03-17T18:45:23.1110773Z the value of ``cast_forward_inputs``. For non-root FSDP modules, 2025-03-17T18:45:23.1110923Z this does not do anything. 
(Default: ``True``) 2025-03-17T18:45:23.1111157Z _module_classes_to_ignore: (Sequence[Type[nn.Module]]): This specifies 2025-03-17T18:45:23.1111374Z module classes to ignore for mixed precision when using an 2025-03-17T18:45:23.1111574Z ``auto_wrap_policy``: Modules of these classes will have FSDP 2025-03-17T18:45:23.1111793Z applied to them separately with mixed precision disabled (meaning 2025-03-17T18:45:23.1112015Z that the final FSDP construction would deviate from the specified 2025-03-17T18:45:23.1112215Z policy). If ``auto_wrap_policy`` is not specified, then this does 2025-03-17T18:45:23.1112431Z not do anything. This API is experimental and subject to change. 2025-03-17T18:45:23.1112549Z (Default: ``(_BatchNorm,)``) 2025-03-17T18:45:23.1112647Z 2025-03-17T18:45:23.1112829Z .. note:: This API is experimental and subject to change. 2025-03-17T18:45:23.1112920Z 2025-03-17T18:45:23.1113150Z .. note:: Only floating point tensors are cast to their specified dtypes. 2025-03-17T18:45:23.1113242Z 2025-03-17T18:45:23.1113431Z .. note:: In ``summon_full_params``, parameters are forced to full 2025-03-17T18:45:23.1113561Z precision, but buffers are not. 2025-03-17T18:45:23.1113651Z 2025-03-17T18:45:23.1113873Z .. note:: Layer norm and batch norm accumulate in ``float32`` even when 2025-03-17T18:45:23.1114089Z their inputs are in a low precision like ``float16`` or ``bfloat16``. 2025-03-17T18:45:23.1114337Z Disabling FSDP's mixed precision for those norm modules only means that 2025-03-17T18:45:23.1114560Z the affine parameters are kept in ``float32``. However, this incurs 2025-03-17T18:45:23.1114810Z separate all-gathers and reduce-scatters for those norm modules, which 2025-03-17T18:45:23.1115082Z may be inefficient, so if the workload permits, the user should prefer 2025-03-17T18:45:23.1115245Z to still apply mixed precision to those modules. 2025-03-17T18:45:23.1115332Z 2025-03-17T18:45:23.1115551Z .. note:: By default, if the user passes a model with any ``_BatchNorm`` 2025-03-17T18:45:23.1115766Z modules and specifies an ``auto_wrap_policy``, then the batch norm 2025-03-17T18:45:23.1116006Z modules will have FSDP applied to them separately with mixed precision 2025-03-17T18:45:23.1116185Z disabled. See the ``_module_classes_to_ignore`` argument. 2025-03-17T18:45:23.1116286Z 2025-03-17T18:45:23.1116496Z .. note:: ``MixedPrecision`` has ``cast_root_forward_inputs=True`` and 2025-03-17T18:45:23.1116727Z ``cast_forward_inputs=False`` by default. For the root FSDP instance, 2025-03-17T18:45:23.1116908Z its ``cast_root_forward_inputs`` takes precedence over its 2025-03-17T18:45:23.1117102Z ``cast_forward_inputs``. For non-root FSDP instances, their 2025-03-17T18:45:23.1117322Z ``cast_root_forward_inputs`` values are ignored. The default setting is 2025-03-17T18:45:23.1117559Z sufficient for the typical case where each FSDP instance has the same 2025-03-17T18:45:23.1117789Z ``MixedPrecision`` configuration and only needs to cast inputs to the 2025-03-17T18:45:23.1117989Z ``param_dtype`` at the beginning of the model's forward pass. 2025-03-17T18:45:23.1118074Z 2025-03-17T18:45:23.1118295Z .. note:: For nested FSDP instances with different ``MixedPrecision`` 2025-03-17T18:45:23.1118537Z configurations, we recommend setting individual ``cast_forward_inputs`` 2025-03-17T18:45:23.1118781Z values to configure casting inputs or not before each instance's 2025-03-17T18:45:23.1118981Z forward. 
In such a case, since the casts happen before each FSDP 2025-03-17T18:45:23.1119212Z instance's forward, a parent FSDP instance should have its non-FSDP 2025-03-17T18:45:23.1119453Z submodules run before its FSDP submodules to avoid the activation dtype 2025-03-17T18:45:23.1119676Z being changed due to a different ``MixedPrecision`` configuration. 2025-03-17T18:45:23.1119761Z 2025-03-17T18:45:23.1119867Z Example:: 2025-03-17T18:45:23.1119953Z 2025-03-17T18:45:23.1120126Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1120299Z >>> model = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3)) 2025-03-17T18:45:23.1120408Z >>> model[1] = FSDP( 2025-03-17T18:45:23.1120505Z >>> model[1], 2025-03-17T18:45:23.1120830Z >>> mixed_precision=MixedPrecision(param_dtype=torch.float16, cast_forward_inputs=True), 2025-03-17T18:45:23.1120919Z >>> ) 2025-03-17T18:45:23.1121027Z >>> model = FSDP( 2025-03-17T18:45:23.1121120Z >>> model, 2025-03-17T18:45:23.1121445Z >>> mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, cast_forward_inputs=True), 2025-03-17T18:45:23.1121538Z >>> ) 2025-03-17T18:45:23.1121632Z 2025-03-17T18:45:23.1121846Z The above shows a working example. On the other hand, if ``model[1]`` 2025-03-17T18:45:23.1122062Z were replaced with ``model[0]``, meaning that the submodule using 2025-03-17T18:45:23.1122293Z different ``MixedPrecision`` ran its forward first, then ``model[1]`` 2025-03-17T18:45:23.1122529Z would incorrectly see ``float16`` activations instead of ``bfloat16`` 2025-03-17T18:45:23.1122615Z ones. 2025-03-17T18:45:23.1122709Z 2025-03-17T18:45:23.1122796Z 2025-03-17T18:45:23.1123064Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1123150Z 2025-03-17T18:45:23.1123264Z warnings.warn(msg) 2025-03-17T18:45:23.1123350Z 2025-03-17T18:45:23.1123574Z --- Parse Warning: 63 / 116 --- 2025-03-17T18:45:23.1124635Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=FullStateDictConfig in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/api.py line=295. 2025-03-17T18:45:23.1124918Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1125007Z 2025-03-17T18:45:23.1125230Z ``FullStateDictConfig`` is a config class meant to be used with 2025-03-17T18:45:23.1125433Z ``StateDictType.FULL_STATE_DICT``. We recommend enabling both 2025-03-17T18:45:23.1125656Z ``offload_to_cpu=True`` and ``rank0_only=True`` when saving full state 2025-03-17T18:45:23.1125887Z dicts to save GPU memory and CPU memory, respectively. This config class 2025-03-17T18:45:23.1126106Z is meant to be used via the :func:`state_dict_type` context manager as 2025-03-17T18:45:23.1126199Z follows: 2025-03-17T18:45:23.1126297Z 2025-03-17T18:45:23.1126433Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1126693Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:23.1126827Z >>> fsdp = FSDP(model, auto_wrap_policy=...) 2025-03-17T18:45:23.1127047Z >>> cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True) 2025-03-17T18:45:23.1127273Z >>> with FSDP.state_dict_type(fsdp, StateDictType.FULL_STATE_DICT, cfg): 2025-03-17T18:45:23.1127399Z >>> state = fsdp.state_dict() 2025-03-17T18:45:23.1127614Z >>> # `state` will be empty on non rank 0 and contain CPU tensors on rank 0. 
2025-03-17T18:45:23.1127869Z >>> # To reload checkpoint for inference, finetuning, transfer learning, etc: 2025-03-17T18:45:23.1128135Z >>> model = model_fn() # Initialize model in preparation for wrapping with FSDP 2025-03-17T18:45:23.1128253Z >>> if dist.get_rank() == 0: 2025-03-17T18:45:23.1128440Z >>> # Load checkpoint only on rank 0 to avoid memory redundancy 2025-03-17T18:45:23.1128599Z >>> state_dict = torch.load("my_checkpoint.pt") 2025-03-17T18:45:23.1128723Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:23.1128974Z >>> # All ranks initialize FSDP module as usual. `sync_module_states` argument 2025-03-17T18:45:23.1129221Z >>> # communicates loaded checkpoint states from rank 0 to rest of the world. 2025-03-17T18:45:23.1129354Z >>> fsdp = FSDP( 2025-03-17T18:45:23.1129446Z ... model, 2025-03-17T18:45:23.1129594Z ... device_id=torch.cuda.current_device(), 2025-03-17T18:45:23.1129704Z ... auto_wrap_policy=..., 2025-03-17T18:45:23.1129829Z ... sync_module_states=True, 2025-03-17T18:45:23.1129916Z ... ) 2025-03-17T18:45:23.1130136Z >>> # After this point, all ranks have FSDP model with loaded checkpoint. 2025-03-17T18:45:23.1130233Z 2025-03-17T18:45:23.1130328Z Attributes: 2025-03-17T18:45:23.1130540Z rank0_only (bool): If ``True``, then only rank 0 saves the full state 2025-03-17T18:45:23.1130755Z dict, and nonzero ranks save an empty dict. If ``False``, then all 2025-03-17T18:45:23.1130929Z ranks save the full state dict. (Default: ``False``) 2025-03-17T18:45:23.1131014Z 2025-03-17T18:45:23.1131283Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1131373Z 2025-03-17T18:45:23.1131482Z warnings.warn(msg) 2025-03-17T18:45:23.1131567Z 2025-03-17T18:45:23.1131773Z --- Parse Warning: 64 / 116 --- 2025-03-17T18:45:23.1132983Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=FullyShardedDataParallel.set_state_dict_type in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=639. 2025-03-17T18:45:23.1133264Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1133565Z Set the ``state_dict_type`` of all the descendant FSDP modules of the target module. 2025-03-17T18:45:23.1133663Z 2025-03-17T18:45:23.1133932Z Also takes (optional) configuration for the model's and optimizer's state dict. 2025-03-17T18:45:23.1134155Z The target module does not have to be a FSDP module. If the target 2025-03-17T18:45:23.1134374Z module is a FSDP module, its ``state_dict_type`` will also be changed. 2025-03-17T18:45:23.1134470Z 2025-03-17T18:45:23.1134673Z .. note:: This API should be called for only the top-level (root) 2025-03-17T18:45:23.1134774Z module. 2025-03-17T18:45:23.1134864Z 2025-03-17T18:45:23.1135087Z .. note:: This API enables users to transparently use the conventional 2025-03-17T18:45:23.1135285Z ``state_dict`` API to take model checkpoints in cases where the 2025-03-17T18:45:23.1135515Z root FSDP module is wrapped by another ``nn.Module``. 
For example, 2025-03-17T18:45:23.1135733Z the following will ensure ``state_dict`` is called on all non-FSDP 2025-03-17T18:45:23.1135981Z instances, while dispatching into `sharded_state_dict` implementation 2025-03-17T18:45:23.1136076Z for FSDP: 2025-03-17T18:45:23.1136171Z 2025-03-17T18:45:23.1136269Z Example:: 2025-03-17T18:45:23.1136365Z 2025-03-17T18:45:23.1136505Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1136627Z >>> model = DDP(FSDP(...)) 2025-03-17T18:45:23.1136939Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:23.1137106Z >>> model, 2025-03-17T18:45:23.1137241Z >>> StateDictType.SHARDED_STATE_DICT, 2025-03-17T18:45:23.1137475Z >>> state_dict_config = ShardedStateDictConfig(offload_to_cpu=True), 2025-03-17T18:45:23.1137704Z >>> optim_state_dict_config = OptimStateDictConfig(offload_to_cpu=True), 2025-03-17T18:45:23.1137807Z >>> ) 2025-03-17T18:45:23.1137943Z >>> param_state_dict = model.state_dict() 2025-03-17T18:45:23.1138132Z >>> optim_state_dict = FSDP.optim_state_dict(model, optim) 2025-03-17T18:45:23.1138217Z 2025-03-17T18:45:23.1138318Z Args: 2025-03-17T18:45:23.1138487Z module (torch.nn.Module): Root module. 2025-03-17T18:45:23.1138734Z state_dict_type (StateDictType): the desired ``state_dict_type`` to set. 2025-03-17T18:45:23.1138979Z state_dict_config (Optional[StateDictConfig]): the configuration for the 2025-03-17T18:45:23.1139114Z target ``state_dict_type``. 2025-03-17T18:45:23.1139376Z optim_state_dict_config (Optional[OptimStateDictConfig]): the configuration 2025-03-17T18:45:23.1139514Z for the optimizer state dict. 2025-03-17T18:45:23.1139600Z 2025-03-17T18:45:23.1139706Z Returns: 2025-03-17T18:45:23.1139938Z A StateDictSettings that include the previous state_dict type and 2025-03-17T18:45:23.1140077Z configuration for the module. 2025-03-17T18:45:23.1140166Z 2025-03-17T18:45:23.1140441Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1140530Z 2025-03-17T18:45:23.1140649Z warnings.warn(msg) 2025-03-17T18:45:23.1140736Z 2025-03-17T18:45:23.1140952Z --- Parse Warning: 65 / 116 --- 2025-03-17T18:45:23.1142147Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=FullyShardedDataParallel.state_dict_type in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=797. 2025-03-17T18:45:23.1142433Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1142689Z Set the ``state_dict_type`` of all the descendant FSDP modules of the target module. 2025-03-17T18:45:23.1142853Z 2025-03-17T18:45:23.1143188Z This context manager has the same functions as :meth:`set_state_dict_type`. Read the document of 2025-03-17T18:45:23.1143338Z :meth:`set_state_dict_type` for the detail. 2025-03-17T18:45:23.1143426Z 2025-03-17T18:45:23.1143539Z Example:: 2025-03-17T18:45:23.1143625Z 2025-03-17T18:45:23.1143776Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1143892Z >>> model = DDP(FSDP(...)) 2025-03-17T18:45:23.1144026Z >>> with FSDP.state_dict_type( 2025-03-17T18:45:23.1144125Z >>> model, 2025-03-17T18:45:23.1144271Z >>> StateDictType.SHARDED_STATE_DICT, 2025-03-17T18:45:23.1144364Z >>> ): 2025-03-17T18:45:23.1144493Z >>> checkpoint = model.state_dict() 2025-03-17T18:45:23.1144589Z 2025-03-17T18:45:23.1144681Z Args: 2025-03-17T18:45:23.1144826Z module (torch.nn.Module): Root module. 
2025-03-17T18:45:23.1145063Z state_dict_type (StateDictType): the desired ``state_dict_type`` to set. 2025-03-17T18:45:23.1145311Z state_dict_config (Optional[StateDictConfig]): the model ``state_dict`` 2025-03-17T18:45:23.1145479Z configuration for the target ``state_dict_type``. 2025-03-17T18:45:23.1145729Z optim_state_dict_config (Optional[OptimStateDictConfig]): the optimizer 2025-03-17T18:45:23.1145932Z ``state_dict`` configuration for the target ``state_dict_type``. 2025-03-17T18:45:23.1146057Z 2025-03-17T18:45:23.1148807Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1148923Z 2025-03-17T18:45:23.1149030Z warnings.warn(msg) 2025-03-17T18:45:23.1149129Z 2025-03-17T18:45:23.1149355Z --- Parse Warning: 66 / 116 --- 2025-03-17T18:45:23.1150578Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=FullyShardedDataParallel.optim_state_dict in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=1810. 2025-03-17T18:45:23.1150849Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1150996Z 2025-03-17T18:45:23.1151244Z Transform the state-dict of an optimizer corresponding to a sharded model. 2025-03-17T18:45:23.1151346Z 2025-03-17T18:45:23.1151546Z The given state-dict can be transformed to one of three types: 2025-03-17T18:45:23.1151900Z 1) full optimizer state_dict, 2) sharded optimizer state_dict, 3) local optimizer state_dict. 2025-03-17T18:45:23.1151985Z 2025-03-17T18:45:23.1152238Z For full optimizer state_dict, all states are unflattened and not sharded. 2025-03-17T18:45:23.1152459Z Rank0 only and CPU only can be specified via :meth:`state_dict_type` to 2025-03-17T18:45:23.1152566Z avoid OOM. 2025-03-17T18:45:23.1152654Z 2025-03-17T18:45:23.1152906Z For sharded optimizer state_dict, all states are unflattened but sharded. 2025-03-17T18:45:23.1153117Z CPU only can be specified via :meth:`state_dict_type` to further save 2025-03-17T18:45:23.1153218Z memory. 2025-03-17T18:45:23.1153308Z 2025-03-17T18:45:23.1153545Z For local state_dict, no transformation will be performed. But a state 2025-03-17T18:45:23.1153792Z will be converted from nn.Tensor to ShardedTensor to represent its sharding 2025-03-17T18:45:23.1153925Z nature (this is not supported yet). 2025-03-17T18:45:23.1154014Z 2025-03-17T18:45:23.1154121Z Example:: 2025-03-17T18:45:23.1154207Z 2025-03-17T18:45:23.1154345Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1154597Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:23.1154763Z >>> from torch.distributed.fsdp import StateDictType 2025-03-17T18:45:23.1154993Z >>> from torch.distributed.fsdp import FullStateDictConfig 2025-03-17T18:45:23.1155203Z >>> from torch.distributed.fsdp import FullOptimStateDictConfig 2025-03-17T18:45:23.1155319Z >>> # Save a checkpoint 2025-03-17T18:45:23.1155421Z >>> model, optim = ... 
2025-03-17T18:45:23.1155546Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:23.1155637Z >>> model, 2025-03-17T18:45:23.1155775Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:23.1155911Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1156073Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1156167Z >>> ) 2025-03-17T18:45:23.1156298Z >>> state_dict = model.state_dict() 2025-03-17T18:45:23.1156472Z >>> optim_state_dict = FSDP.optim_state_dict(model, optim) 2025-03-17T18:45:23.1156633Z >>> save_a_checkpoint(state_dict, optim_state_dict) 2025-03-17T18:45:23.1156736Z >>> # Load a checkpoint 2025-03-17T18:45:23.1156851Z >>> model, optim = ... 2025-03-17T18:45:23.1157006Z >>> state_dict, optim_state_dict = load_a_checkpoint() 2025-03-17T18:45:23.1157129Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:23.1157221Z >>> model, 2025-03-17T18:45:23.1157358Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:23.1157495Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1157656Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1157745Z >>> ) 2025-03-17T18:45:23.1157877Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:23.1158065Z >>> optim_state_dict = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:23.1158279Z >>> model, optim, optim_state_dict 2025-03-17T18:45:23.1158371Z >>> ) 2025-03-17T18:45:23.1158515Z >>> optim.load_state_dict(optim_state_dict) 2025-03-17T18:45:23.1158601Z 2025-03-17T18:45:23.1158703Z Args: 2025-03-17T18:45:23.1158908Z model (torch.nn.Module): Root module (which may or may not be a 2025-03-17T18:45:23.1159127Z :class:`FullyShardedDataParallel` instance) whose parameters 2025-03-17T18:45:23.1159266Z were passed into the optimizer ``optim``. 2025-03-17T18:45:23.1159493Z optim (torch.optim.Optimizer): Optimizer for ``model`` 's 2025-03-17T18:45:23.1159593Z parameters. 2025-03-17T18:45:23.1159826Z optim_state_dict (Dict[str, Any]): the target optimizer state_dict to 2025-03-17T18:45:23.1160043Z transform. If the value is None, optim.state_dict() will be used. ( 2025-03-17T18:45:23.1160161Z Default: ``None``) 2025-03-17T18:45:23.1160405Z group (dist.ProcessGroup): Model's process group across which parameters 2025-03-17T18:45:23.1160604Z are sharded or ``None`` if using the default process group. ( 2025-03-17T18:45:23.1160706Z Default: ``None``) 2025-03-17T18:45:23.1160803Z 2025-03-17T18:45:23.1160896Z Returns: 2025-03-17T18:45:23.1161105Z Dict[str, Any]: A :class:`dict` containing the optimizer state for 2025-03-17T18:45:23.1161278Z ``model``. The sharding of the optimizer state is based on 2025-03-17T18:45:23.1161383Z ``state_dict_type``. 2025-03-17T18:45:23.1161485Z 2025-03-17T18:45:23.1161745Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1161842Z 2025-03-17T18:45:23.1161945Z warnings.warn(msg) 2025-03-17T18:45:23.1162043Z 2025-03-17T18:45:23.1162249Z --- Parse Warning: 67 / 116 --- 2025-03-17T18:45:23.1163506Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=FullyShardedDataParallel.optim_state_dict_to_load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py line=1908. 
2025-03-17T18:45:23.1163799Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1163903Z 2025-03-17T18:45:23.1164270Z Convert an optimizer state-dict so that it can be loaded into the optimizer associated with the FSDP model. 2025-03-17T18:45:23.1164370Z 2025-03-17T18:45:23.1164547Z Given a ``optim_state_dict`` that is transformed through 2025-03-17T18:45:23.1164783Z :meth:`optim_state_dict`, it gets converted to the flattened optimizer 2025-03-17T18:45:23.1165002Z state_dict that can be loaded to ``optim`` which is the optimizer for 2025-03-17T18:45:23.1165206Z ``model``. ``model`` must be sharded by FullyShardedDataParallel. 2025-03-17T18:45:23.1165294Z 2025-03-17T18:45:23.1165443Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1165686Z >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP 2025-03-17T18:45:23.1165859Z >>> from torch.distributed.fsdp import StateDictType 2025-03-17T18:45:23.1166044Z >>> from torch.distributed.fsdp import FullStateDictConfig 2025-03-17T18:45:23.1166264Z >>> from torch.distributed.fsdp import FullOptimStateDictConfig 2025-03-17T18:45:23.1166365Z >>> # Save a checkpoint 2025-03-17T18:45:23.1166487Z >>> model, optim = ... 2025-03-17T18:45:23.1166602Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:23.1166710Z >>> model, 2025-03-17T18:45:23.1166833Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:23.1166982Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1167136Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1167262Z >>> ) 2025-03-17T18:45:23.1167380Z >>> state_dict = model.state_dict() 2025-03-17T18:45:23.1167558Z >>> original_osd = optim.state_dict() 2025-03-17T18:45:23.1167698Z >>> optim_state_dict = FSDP.optim_state_dict( 2025-03-17T18:45:23.1167803Z >>> model, 2025-03-17T18:45:23.1167894Z >>> optim, 2025-03-17T18:45:23.1168028Z >>> optim_state_dict=original_osd 2025-03-17T18:45:23.1168119Z >>> ) 2025-03-17T18:45:23.1168285Z >>> save_a_checkpoint(state_dict, optim_state_dict) 2025-03-17T18:45:23.1168387Z >>> # Load a checkpoint 2025-03-17T18:45:23.1168510Z >>> model, optim = ... 2025-03-17T18:45:23.1168692Z >>> state_dict, optim_state_dict = load_a_checkpoint() 2025-03-17T18:45:23.1168816Z >>> FSDP.set_state_dict_type( 2025-03-17T18:45:23.1168906Z >>> model, 2025-03-17T18:45:23.1169045Z >>> StateDictType.FULL_STATE_DICT, 2025-03-17T18:45:23.1169180Z >>> FullStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1169344Z >>> FullOptimStateDictConfig(rank0_only=False), 2025-03-17T18:45:23.1169433Z >>> ) 2025-03-17T18:45:23.1169567Z >>> model.load_state_dict(state_dict) 2025-03-17T18:45:23.1169722Z >>> optim_state_dict = FSDP.optim_state_dict_to_load( 2025-03-17T18:45:23.1169853Z >>> model, optim, optim_state_dict 2025-03-17T18:45:23.1169938Z >>> ) 2025-03-17T18:45:23.1170080Z >>> optim.load_state_dict(optim_state_dict) 2025-03-17T18:45:23.1170166Z 2025-03-17T18:45:23.1170256Z Args: 2025-03-17T18:45:23.1170466Z model (torch.nn.Module): Root module (which may or may not be a 2025-03-17T18:45:23.1170674Z :class:`FullyShardedDataParallel` instance) whose parameters 2025-03-17T18:45:23.1170822Z were passed into the optimizer ``optim``. 2025-03-17T18:45:23.1171010Z optim (torch.optim.Optimizer): Optimizer for ``model`` 's 2025-03-17T18:45:23.1171124Z parameters. 2025-03-17T18:45:23.1171343Z optim_state_dict (Dict[str, Any]): The optimizer states to be loaded. 
2025-03-17T18:45:23.1171562Z is_named_optimizer (bool): Is this optimizer a NamedOptimizer or 2025-03-17T18:45:23.1171761Z KeyedOptimizer. Only set to True if ``optim`` is TorchRec's 2025-03-17T18:45:23.1171984Z KeyedOptimizer or torch.distributed's NamedOptimizer. 2025-03-17T18:45:23.1172190Z load_directly (bool): If this is set to True, this API will also 2025-03-17T18:45:23.1172409Z call optim.load_state_dict(result) before returning the result. 2025-03-17T18:45:23.1172638Z Otherwise, users are responsible to call ``optim.load_state_dict()`` 2025-03-17T18:45:23.1172759Z (Default: ``False``) 2025-03-17T18:45:23.1173004Z group (dist.ProcessGroup): Model's process group across which parameters 2025-03-17T18:45:23.1173208Z are sharded or ``None`` if using the default process group. ( 2025-03-17T18:45:23.1173313Z Default: ``None``) 2025-03-17T18:45:23.1173411Z 2025-03-17T18:45:23.1173675Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1173772Z 2025-03-17T18:45:23.1173879Z warnings.warn(msg) 2025-03-17T18:45:23.1173977Z 2025-03-17T18:45:23.1174176Z --- Parse Warning: 68 / 116 --- 2025-03-17T18:45:23.1175216Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_RemoteModule.__init__ in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=128. 2025-03-17T18:45:23.1175491Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1175589Z 2025-03-17T18:45:23.1175824Z RemoteModule instance can only be created after RPC initialization. 2025-03-17T18:45:23.1175925Z 2025-03-17T18:45:23.1176126Z It creates a user-specified module on a specified remote node. 2025-03-17T18:45:23.1176436Z It behaves like a regular ``nn.Module`` except that the ``forward`` method is 2025-03-17T18:45:23.1176553Z executed on the remote node. 2025-03-17T18:45:23.1176805Z It takes care of autograd recording to ensure the backward pass propagates 2025-03-17T18:45:23.1176966Z gradients back to the corresponding remote module. 2025-03-17T18:45:23.1177344Z It can be shared across processors using `RPC framework `__, 2025-03-17T18:45:23.1177548Z without incurring any overheads of copying the actual module, 2025-03-17T18:45:23.1177770Z which is equivalent to an :class:`~torch.distributed.rpc.RRef` 2025-03-17T18:45:23.1177912Z pointing to the remote module. 2025-03-17T18:45:23.1178012Z 2025-03-17T18:45:23.1178217Z The arguments of ``forward_async`` and ``forward`` are the same as 2025-03-17T18:45:23.1178438Z the ``forward`` method of the module returned by the ``module_cls``. 2025-03-17T18:45:23.1178528Z 2025-03-17T18:45:23.1178855Z Apart from ``forward_async`` and ``forward``, no other methods are supported from nn.Module for now. 2025-03-17T18:45:23.1178939Z 2025-03-17T18:45:23.1179207Z Particularly, to create a hybrid model, typically the local modules should be 2025-03-17T18:45:23.1179590Z created outside of remote modules, rather than as submodules of any remote module (by calling ``add_module``). 2025-03-17T18:45:23.1179701Z Hybrid Example: 2025-03-17T18:45:23.1179821Z >>> class HybridModel(nn.Module): 2025-03-17T18:45:23.1179956Z >>> def __init__(self) -> None: 2025-03-17T18:45:23.1180069Z >>> nn.Module.__init__(self) 2025-03-17T18:45:23.1180231Z >>> self.remote_embedding = RemoteModule(...) 2025-03-17T18:45:23.1180362Z >>> self.local_linear = nn.Linear(...) 
2025-03-17T18:45:23.1180463Z 2025-03-17T18:45:23.1180665Z For example, if ``module_cls`` returns an instance of ``nn.Linear``, 2025-03-17T18:45:23.1180936Z that has ``forward`` method signature, ``def forward(input: Tensor) -> Tensor:``, 2025-03-17T18:45:23.1181148Z the generated ``RemoteModule`` will have 2 methods in signature of 2025-03-17T18:45:23.1181296Z ``def forward(input: Tensor) -> Tensor:`` and 2025-03-17T18:45:23.1181488Z ``def forward_async(input: Tensor) -> Future[Tensor]:``. 2025-03-17T18:45:23.1181589Z 2025-03-17T18:45:23.1181682Z .. note:: 2025-03-17T18:45:23.1181843Z If the remote module is placed on a cuda device, 2025-03-17T18:45:23.1182082Z any input CPU tensors will be automatically moved to the same cuda device, 2025-03-17T18:45:23.1182501Z and GPU tensors are returned over the wire according to the device map of the remote worker on TensorPipe RPC backend. 2025-03-17T18:45:23.1182585Z 2025-03-17T18:45:23.1182683Z Args: 2025-03-17T18:45:23.1182985Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:23.1183301Z The device can be a local device or a remote device specified by one of the following remote 2025-03-17T18:45:23.1183400Z formats: 2025-03-17T18:45:23.1183496Z 2025-03-17T18:45:23.1183640Z 1. "rank:/" (ex: "rank:0/cuda:0"). 2025-03-17T18:45:23.1183810Z 2. "/" (ex: "trainer0/cuda:0"). 2025-03-17T18:45:23.1183896Z 2025-03-17T18:45:23.1184156Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:23.1184281Z module_cls (nn.Module): For example, 2025-03-17T18:45:23.1184409Z >>> class MyModule(nn.Module): 2025-03-17T18:45:23.1184517Z >>> def forward(input): 2025-03-17T18:45:23.1184626Z >>> return input + 1 2025-03-17T18:45:23.1184723Z >>> 2025-03-17T18:45:23.1184833Z >>> module_cls = MyModule 2025-03-17T18:45:23.1185076Z args (Sequence, optional): args to be passed to ``module_cls``. 2025-03-17T18:45:23.1185302Z kwargs (Dict, optional): kwargs to be passed to ``module_cls``. 2025-03-17T18:45:23.1185601Z _module_interface_cls (type, optional): The TorchScript interface type for the module 2025-03-17T18:45:23.1185843Z to be created. The type object should be decorated by @torch.jit.interface. 2025-03-17T18:45:23.1186081Z If not provided, the generated RemoteModule is not torchscript-able. 2025-03-17T18:45:23.1186322Z Warning, this is an experimental API and susceptible to frequent changes. 2025-03-17T18:45:23.1186515Z 2025-03-17T18:45:23.1186643Z Returns: 2025-03-17T18:45:23.1186907Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:23.1187146Z user-provided ``module_cls``, it has a blocking ``forward`` method and an 2025-03-17T18:45:23.1187437Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:23.1187589Z on the user-provided module on the remote side. 
2025-03-17T18:45:23.1187691Z 2025-03-17T18:45:23.1187788Z Example:: 2025-03-17T18:45:23.1187963Z Run the following code in two different processes: 2025-03-17T18:45:23.1188052Z 2025-03-17T18:45:23.1188187Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1188286Z >>> # On worker 0: 2025-03-17T18:45:23.1188395Z >>> import torch 2025-03-17T18:45:23.1188529Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1188657Z >>> from torch import nn, Tensor 2025-03-17T18:45:23.1188883Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:23.1188985Z >>> 2025-03-17T18:45:23.1189130Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:23.1189266Z >>> remote_linear_module = RemoteModule( 2025-03-17T18:45:23.1189397Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:23.1189498Z >>> ) 2025-03-17T18:45:23.1189613Z >>> input = torch.randn(128, 20) 2025-03-17T18:45:23.1189781Z >>> ret_fut = remote_linear_module.forward_async(input) 2025-03-17T18:45:23.1189884Z >>> ret = ret_fut.wait() 2025-03-17T18:45:23.1189997Z >>> rpc.shutdown() 2025-03-17T18:45:23.1190083Z 2025-03-17T18:45:23.1190212Z >>> # On worker 1: 2025-03-17T18:45:23.1190313Z >>> import torch 2025-03-17T18:45:23.1190455Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1190543Z >>> 2025-03-17T18:45:23.1190697Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:23.1190800Z >>> rpc.shutdown() 2025-03-17T18:45:23.1190900Z 2025-03-17T18:45:23.1191160Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1191247Z 2025-03-17T18:45:23.1191358Z warnings.warn(msg) 2025-03-17T18:45:23.1191443Z 2025-03-17T18:45:23.1191668Z --- Parse Warning: 69 / 116 --- 2025-03-17T18:45:23.1192759Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_RemoteModule.init_from_module_rref in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=505. 2025-03-17T18:45:23.1193047Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1193133Z 2025-03-17T18:45:23.1193477Z Besides the constructor, a RemoteModule instance can also be initialized given a module RRef. 2025-03-17T18:45:23.1193565Z 2025-03-17T18:45:23.1193915Z This alternate initialization method can be particularly useful if we want to create multiple 2025-03-17T18:45:23.1194241Z RemoteModule instances that share the same underlying module and reduce memory consumption. 2025-03-17T18:45:23.1194339Z 2025-03-17T18:45:23.1194624Z Moreover, this also provides a workaround for passing script RemoteModule over RPC, 2025-03-17T18:45:23.1194888Z which is not supported. The recommended way is as follows: 2025-03-17T18:45:23.1194977Z 2025-03-17T18:45:23.1195113Z 1. the sender creates a RemoteModule; 2025-03-17T18:45:23.1195263Z 2. the sender sends its ``module_rref`` over RPC; 2025-03-17T18:45:23.1195620Z 3. the receiver calls this method to initialize another RemoteModule using the same ``module_rref``. 
2025-03-17T18:45:23.1195706Z 2025-03-17T18:45:23.1195813Z Example:: 2025-03-17T18:45:23.1195971Z Run the following code in two different processes: 2025-03-17T18:45:23.1196069Z 2025-03-17T18:45:23.1196215Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1196326Z >>> # On worker 0: 2025-03-17T18:45:23.1196423Z >>> import torch 2025-03-17T18:45:23.1196564Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1196679Z >>> from torch import nn, Tensor 2025-03-17T18:45:23.1196917Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:23.1197007Z >>> 2025-03-17T18:45:23.1197163Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:23.1197278Z >>> remote_module = RemoteModule( 2025-03-17T18:45:23.1197421Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:23.1197507Z >>> ) 2025-03-17T18:45:23.1197608Z >>> 2025-03-17T18:45:23.1197725Z >>> remote_module1 = rpc.rpc_sync( 2025-03-17T18:45:23.1197840Z >>> "worker1/cpu", 2025-03-17T18:45:23.1197970Z >>> RemoteModule.init_from_module_rref, 2025-03-17T18:45:23.1198144Z >>> ("worker1/cpu", remote_module1.get_module_rref()), 2025-03-17T18:45:23.1198235Z >>> ) 2025-03-17T18:45:23.1198351Z >>> rpc.shutdown() 2025-03-17T18:45:23.1198436Z 2025-03-17T18:45:23.1198545Z >>> # On worker 1: 2025-03-17T18:45:23.1198647Z >>> import torch 2025-03-17T18:45:23.1198782Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1198879Z >>> 2025-03-17T18:45:23.1199024Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:23.1199138Z >>> rpc.shutdown() 2025-03-17T18:45:23.1199223Z 2025-03-17T18:45:23.1199321Z Args: 2025-03-17T18:45:23.1199650Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:23.1199961Z The device can be a local device or a remote device specified by one of the following remote 2025-03-17T18:45:23.1200053Z formats: 2025-03-17T18:45:23.1200152Z 2025-03-17T18:45:23.1200298Z 1. "rank:/" (ex: "rank:0/cuda:0"). 2025-03-17T18:45:23.1200469Z 2. "/" (ex: "trainer0/cuda:0"). 2025-03-17T18:45:23.1200555Z 2025-03-17T18:45:23.1200818Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:23.1201078Z module_rref (RRef[nn.Module]): The module reference shared by both the caller and 2025-03-17T18:45:23.1201208Z the created remote module. 2025-03-17T18:45:23.1201491Z _module_interface_cls (type, optional): The TorchScript interface type for the module 2025-03-17T18:45:23.1201744Z to be created. The type object should be decorated by @torch.jit.interface. 2025-03-17T18:45:23.1201972Z If not provided, the generated RemoteModule is not torchscript-able. 2025-03-17T18:45:23.1202225Z Warning, this is an experimental API and susceptible to frequent changes. 2025-03-17T18:45:23.1202309Z 2025-03-17T18:45:23.1202413Z Returns: 2025-03-17T18:45:23.1202662Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:23.1202914Z user-provided ``module_rref``, it has a blocking ``forward`` method and an 2025-03-17T18:45:23.1203191Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:23.1203379Z on the user-provided module on the remote side. 
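Condensed from the three-step recipe above, a sketch of what the receiving side does once it has the sender's module RRef (a fragment, not a complete program: it assumes rpc.init_rpc has already run on both workers as in the example, and `module_rref` stands for the RRef received from the sender):

    from torch.distributed.nn.api.remote_module import RemoteModule

    # `module_rref` was produced on the sender via remote_module.get_module_rref()
    # and shipped here over RPC, as in steps 1-3 above.
    shared_remote = RemoteModule.init_from_module_rref("worker1/cpu", module_rref)
    # `shared_remote` wraps the same underlying module as the sender's RemoteModule,
    # so no second copy of the module is created on worker1.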
2025-03-17T18:45:23.1203490Z 2025-03-17T18:45:23.1203765Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1203853Z 2025-03-17T18:45:23.1203968Z warnings.warn(msg) 2025-03-17T18:45:23.1204054Z 2025-03-17T18:45:23.1204266Z --- Parse Warning: 70 / 116 --- 2025-03-17T18:45:23.1205259Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=RemoteModule in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/nn/api/remote_module.py line=597. 2025-03-17T18:45:23.1205569Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1205654Z 2025-03-17T18:45:23.1205901Z A RemoteModule instance can only be created after RPC initialization. 2025-03-17T18:45:23.1205987Z 2025-03-17T18:45:23.1206203Z It creates a user-specified module on a specified remote node. 2025-03-17T18:45:23.1206444Z It behaves like a regular ``nn.Module`` except that the ``forward`` method is 2025-03-17T18:45:23.1206572Z executed on the remote node. 2025-03-17T18:45:23.1206814Z It takes care of autograd recording to ensure the backward pass propagates 2025-03-17T18:45:23.1206992Z gradients back to the corresponding remote module. 2025-03-17T18:45:23.1207076Z 2025-03-17T18:45:23.1207315Z It generates two methods ``forward_async`` and ``forward`` based on the 2025-03-17T18:45:23.1207534Z signature of the ``forward`` method of ``module_cls``. ``forward_async`` 2025-03-17T18:45:23.1207813Z runs asynchronously and returns a Future. The arguments of ``forward_async`` 2025-03-17T18:45:23.1208014Z and ``forward`` are the same as the ``forward`` method of the module 2025-03-17T18:45:23.1208142Z returned by the ``module_cls``. 2025-03-17T18:45:23.1208229Z 2025-03-17T18:45:23.1208444Z For example, if ``module_cls`` returns an instance of ``nn.Linear``, 2025-03-17T18:45:23.1208698Z that has ``forward`` method signature: ``def forward(input: Tensor) -> Tensor:``, 2025-03-17T18:45:23.1208941Z the generated ``RemoteModule`` will have 2 methods with the signatures: 2025-03-17T18:45:23.1209025Z 2025-03-17T18:45:23.1209190Z | ``def forward(input: Tensor) -> Tensor:`` 2025-03-17T18:45:23.1209359Z | ``def forward_async(input: Tensor) -> Future[Tensor]:`` 2025-03-17T18:45:23.1209457Z 2025-03-17T18:45:23.1209545Z Args: 2025-03-17T18:45:23.1209862Z remote_device (str): Device on the destination worker where we'd like to place this module. 2025-03-17T18:45:23.1210210Z The format should be "/", where the device field can be parsed as torch.device type. 2025-03-17T18:45:23.1210366Z E.g., "trainer0/cpu", "trainer0", "ps0/cuda:0". 2025-03-17T18:45:23.1210617Z In addition, the device field can be optional and the default value is "cpu". 2025-03-17T18:45:23.1210881Z module_cls (nn.Module): Class for the module to be created remotely. For example, 2025-03-17T18:45:23.1210963Z 2025-03-17T18:45:23.1211090Z >>> class MyModule(nn.Module): 2025-03-17T18:45:23.1211198Z >>> def forward(input): 2025-03-17T18:45:23.1211316Z >>> return input + 1 2025-03-17T18:45:23.1211407Z >>> 2025-03-17T18:45:23.1211524Z >>> module_cls = MyModule 2025-03-17T18:45:23.1211610Z 2025-03-17T18:45:23.1211827Z args (Sequence, optional): args to be passed to ``module_cls``. 2025-03-17T18:45:23.1212029Z kwargs (Dict, optional): kwargs to be passed to ``module_cls``. 
2025-03-17T18:45:23.1212123Z 2025-03-17T18:45:23.1212212Z Returns: 2025-03-17T18:45:23.1212461Z A remote module instance which wraps the :class:`~nn.Module` created by the 2025-03-17T18:45:23.1212737Z user-provided ``module_cls``, it has a blocking ``forward`` method and an 2025-03-17T18:45:23.1213039Z asynchronous ``forward_async`` method that returns a future of the ``forward`` call 2025-03-17T18:45:23.1213200Z on the user-provided module on the remote side. 2025-03-17T18:45:23.1213283Z 2025-03-17T18:45:23.1213387Z Example:: 2025-03-17T18:45:23.1213545Z Run the following code in two different processes: 2025-03-17T18:45:23.1213638Z 2025-03-17T18:45:23.1213758Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1213869Z >>> # On worker 0: 2025-03-17T18:45:23.1213965Z >>> import torch 2025-03-17T18:45:23.1214165Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1214275Z >>> from torch import nn, Tensor 2025-03-17T18:45:23.1214513Z >>> from torch.distributed.nn.api.remote_module import RemoteModule 2025-03-17T18:45:23.1214599Z >>> 2025-03-17T18:45:23.1214751Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:23.1214876Z >>> remote_linear_module = RemoteModule( 2025-03-17T18:45:23.1215018Z >>> "worker1/cpu", nn.Linear, args=(20, 30), 2025-03-17T18:45:23.1215102Z >>> ) 2025-03-17T18:45:23.1215228Z >>> input = torch.randn(128, 20) 2025-03-17T18:45:23.1215385Z >>> ret_fut = remote_linear_module.forward_async(input) 2025-03-17T18:45:23.1215500Z >>> ret = ret_fut.wait() 2025-03-17T18:45:23.1215600Z >>> rpc.shutdown() 2025-03-17T18:45:23.1215696Z 2025-03-17T18:45:23.1215791Z >>> # On worker 1: 2025-03-17T18:45:23.1215897Z >>> import torch 2025-03-17T18:45:23.1216028Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1216124Z >>> 2025-03-17T18:45:23.1216267Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:23.1216381Z >>> rpc.shutdown() 2025-03-17T18:45:23.1216466Z 2025-03-17T18:45:23.1216675Z Furthermore, a more practical example that is combined with 2025-03-17T18:45:23.1217164Z `DistributedDataParallel `__ (DDP) 2025-03-17T18:45:23.1217514Z can be found in this `tutorial `__. 2025-03-17T18:45:23.1217597Z 2025-03-17T18:45:23.1217892Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1217974Z 2025-03-17T18:45:23.1218077Z warnings.warn(msg) 2025-03-17T18:45:23.1218170Z 2025-03-17T18:45:23.1218370Z --- Parse Warning: 71 / 116 --- 2025-03-17T18:45:23.1219396Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DistributedOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/optimizer.py line=130. 2025-03-17T18:45:23.1219678Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1219763Z 2025-03-17T18:45:23.1220022Z DistributedOptimizer takes remote references to parameters scattered 2025-03-17T18:45:23.1220266Z across workers and applies the given optimizer locally for each parameter. 2025-03-17T18:45:23.1220352Z 2025-03-17T18:45:23.1220609Z This class uses :meth:`~torch.distributed.autograd.get_gradients` in order 2025-03-17T18:45:23.1220769Z to retrieve the gradients for specific parameters. 
2025-03-17T18:45:23.1220867Z 2025-03-17T18:45:23.1220968Z Concurrent calls to 2025-03-17T18:45:23.1221189Z :meth:`~torch.distributed.optim.DistributedOptimizer.step`, 2025-03-17T18:45:23.1221337Z either from the same or different clients, will 2025-03-17T18:45:23.1221578Z be serialized on each worker -- as each worker's optimizer can only work 2025-03-17T18:45:23.1221791Z on one set of gradients at a time. However, there is no guarantee that 2025-03-17T18:45:23.1222054Z the full forward-backward-optimizer sequence will execute for one client 2025-03-17T18:45:23.1222331Z at a time. This means that the gradients being applied may not correspond 2025-03-17T18:45:23.1222573Z to the latest forward pass executed on a given worker. Also, there is no 2025-03-17T18:45:23.1222692Z guaranteed ordering across workers. 2025-03-17T18:45:23.1222790Z 2025-03-17T18:45:23.1223056Z `DistributedOptimizer` creates the local optimizer with TorchScript enabled 2025-03-17T18:45:23.1223301Z by default, so that optimizer updates are not blocked by the Python Global 2025-03-17T18:45:23.1223548Z Interpreter Lock (GIL) in the case of multithreaded training (e.g. Distributed 2025-03-17T18:45:23.1223830Z Model Parallel). This feature is currently enabled for most optimizers. You 2025-03-17T18:45:23.1224089Z can also follow `the recipe`__ in PyTorch tutorials to enable TorchScript support 2025-03-17T18:45:23.1224214Z for your own custom optimizers. 2025-03-17T18:45:23.1224301Z 2025-03-17T18:45:23.1224403Z Args: 2025-03-17T18:45:23.1224603Z optimizer_class (optim.Optimizer): the class of optimizer to 2025-03-17T18:45:23.1224726Z instantiate on each worker. 2025-03-17T18:45:23.1224941Z params_rref (list[RRef]): list of RRefs to local or remote parameters 2025-03-17T18:45:23.1225048Z to optimize. 2025-03-17T18:45:23.1225270Z args: arguments to pass to the optimizer constructor on each worker. 2025-03-17T18:45:23.1225509Z kwargs: arguments to pass to the optimizer constructor on each worker. 2025-03-17T18:45:23.1225594Z 2025-03-17T18:45:23.1225699Z Example:: 2025-03-17T18:45:23.1225817Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1226000Z >>> import torch.distributed.autograd as dist_autograd 2025-03-17T18:45:23.1226128Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1226249Z >>> from torch import optim 2025-03-17T18:45:23.1226522Z >>> from torch.distributed.optim import DistributedOptimizer 2025-03-17T18:45:23.1226627Z >>> 2025-03-17T18:45:23.1226769Z >>> with dist_autograd.context() as context_id: 2025-03-17T18:45:23.1226885Z >>> # Forward pass. 2025-03-17T18:45:23.1227085Z >>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3)) 2025-03-17T18:45:23.1227338Z >>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1)) 2025-03-17T18:45:23.1227468Z >>> loss = rref1.to_here() + rref2.to_here() 2025-03-17T18:45:23.1227570Z >>> 2025-03-17T18:45:23.1227669Z >>> # Backward pass. 2025-03-17T18:45:23.1227838Z >>> dist_autograd.backward(context_id, [loss.sum()]) 2025-03-17T18:45:23.1227925Z >>> 2025-03-17T18:45:23.1228032Z >>> # Optimizer. 
2025-03-17T18:45:23.1228158Z >>> dist_optim = DistributedOptimizer( 2025-03-17T18:45:23.1228269Z >>> optim.SGD, 2025-03-17T18:45:23.1228366Z >>> [rref1, rref2], 2025-03-17T18:45:23.1228479Z >>> lr=0.05, 2025-03-17T18:45:23.1228565Z >>> ) 2025-03-17T18:45:23.1228683Z >>> dist_optim.step(context_id) 2025-03-17T18:45:23.1228776Z 2025-03-17T18:45:23.1228938Z __ https://github.com/pytorch/tutorials/pull/1465 2025-03-17T18:45:23.1229031Z 2025-03-17T18:45:23.1229291Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1229379Z 2025-03-17T18:45:23.1229482Z warnings.warn(msg) 2025-03-17T18:45:23.1234282Z 2025-03-17T18:45:23.1234563Z --- Parse Warning: 72 / 116 --- 2025-03-17T18:45:23.1235680Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=PostLocalSGDOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/post_localSGD_optimizer.py line=9. 2025-03-17T18:45:23.1235957Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1236123Z 2025-03-17T18:45:23.1236566Z Wraps an arbitrary :class:`torch.optim.Optimizer` and runs `post-local SGD `_, 2025-03-17T18:45:23.1236898Z This optimizer runs local optimizer at every step. 2025-03-17T18:45:23.1237242Z After the warm-up stage, it averages parameters periodically afer the local optimizer is applied. 2025-03-17T18:45:23.1237344Z 2025-03-17T18:45:23.1237435Z Args: 2025-03-17T18:45:23.1237562Z optim: The local optimizer. 2025-03-17T18:45:23.1237790Z averager: A model averager instance to run post-localSGD algorithm. 2025-03-17T18:45:23.1237888Z 2025-03-17T18:45:23.1238050Z Example:: 2025-03-17T18:45:23.1238149Z 2025-03-17T18:45:23.1238286Z >>> # xdoctest: +SKIP("undefined variables") 2025-03-17T18:45:23.1238395Z >>> import torch 2025-03-17T18:45:23.1238519Z >>> import torch.distributed as dist 2025-03-17T18:45:23.1238807Z >>> import torch.distributed.algorithms.model_averaging.averagers as averagers 2025-03-17T18:45:23.1238912Z >>> import torch.nn as nn 2025-03-17T18:45:23.1239123Z >>> from torch.distributed.optim import PostLocalSGDOptimizer 2025-03-17T18:45:23.1239402Z >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import ( 2025-03-17T18:45:23.1239523Z >>> PostLocalSGDState, 2025-03-17T18:45:23.1239630Z >>> post_localSGD_hook, 2025-03-17T18:45:23.1239728Z >>> ) 2025-03-17T18:45:23.1239816Z >>> 2025-03-17T18:45:23.1239985Z >>> model = nn.parallel.DistributedDataParallel( 2025-03-17T18:45:23.1240131Z >>> module, device_ids=[rank], output_device=rank 2025-03-17T18:45:23.1240222Z >>> ) 2025-03-17T18:45:23.1240320Z >>> 2025-03-17T18:45:23.1240471Z >>> # Register a post-localSGD communication hook. 2025-03-17T18:45:23.1240784Z >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100) 2025-03-17T18:45:23.1240957Z >>> model.register_comm_hook(state, post_localSGD_hook) 2025-03-17T18:45:23.1241053Z >>> 2025-03-17T18:45:23.1241260Z >>> # Create a post-localSGD optimizer that wraps a local optimizer. 2025-03-17T18:45:23.1241523Z >>> # Note that ``warmup_steps`` used in ``PostLocalSGDOptimizer`` must be the same as 2025-03-17T18:45:23.1241732Z >>> # ``start_localSGD_iter`` used in ``PostLocalSGDState``. 
2025-03-17T18:45:23.1241957Z >>> local_optim = torch.optim.SGD(params=model.parameters(), lr=0.01) 2025-03-17T18:45:23.1242080Z >>> opt = PostLocalSGDOptimizer( 2025-03-17T18:45:23.1242200Z >>> optim=local_optim, 2025-03-17T18:45:23.1242457Z >>> averager=averagers.PeriodicModelAverager(period=4, warmup_steps=100) 2025-03-17T18:45:23.1242553Z >>> ) 2025-03-17T18:45:23.1242641Z >>> 2025-03-17T18:45:23.1242880Z >>> # In the first 100 steps, DDP runs global gradient averaging at every step. 2025-03-17T18:45:23.1243192Z >>> # After 100 steps, DDP runs gradient averaging within each subgroup (intra-node by default), 2025-03-17T18:45:23.1243588Z >>> # and post-localSGD optimizer runs global model averaging every 4 steps after applying the local optimizer. 2025-03-17T18:45:23.1243698Z >>> for step in range(0, 200): 2025-03-17T18:45:23.1243810Z >>> opt.zero_grad() 2025-03-17T18:45:23.1243929Z >>> loss = loss_fn(output, labels) 2025-03-17T18:45:23.1244041Z >>> loss.backward() 2025-03-17T18:45:23.1244137Z >>> opt.step() 2025-03-17T18:45:23.1244231Z 2025-03-17T18:45:23.1244493Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1244593Z 2025-03-17T18:45:23.1244696Z warnings.warn(msg) 2025-03-17T18:45:23.1244788Z 2025-03-17T18:45:23.1245001Z --- Parse Warning: 73 / 116 --- 2025-03-17T18:45:23.1246176Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ZeroRedundancyOptimizer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/optim/zero_redundancy_optimizer.py line=284. 2025-03-17T18:45:23.1246482Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1246575Z 2025-03-17T18:45:23.1246989Z Wrap an arbitrary :class:`optim.Optimizer ` and shards its states across ranks in the group. 2025-03-17T18:45:23.1247083Z 2025-03-17T18:45:23.1247215Z The sharing is done as described by ZeRO_. 2025-03-17T18:45:23.1247309Z 2025-03-17T18:45:23.1247569Z The local optimizer instance in each rank is only 2025-03-17T18:45:23.1247859Z responsible for updating approximately ``1 / world_size`` parameters and 2025-03-17T18:45:23.1248070Z hence only needs to keep ``1 / world_size`` optimizer states. After 2025-03-17T18:45:23.1248332Z parameters are updated locally, each rank will broadcast its parameters to 2025-03-17T18:45:23.1248522Z all other peers to keep all model replicas in the same state. 2025-03-17T18:45:23.1248733Z ``ZeroRedundancyOptimizer`` can be used in conjunction with 2025-03-17T18:45:23.1249003Z :class:`torch.nn.parallel.DistributedDataParallel` to reduce per-rank peak 2025-03-17T18:45:23.1249115Z memory consumption. 2025-03-17T18:45:23.1249200Z 2025-03-17T18:45:23.1249477Z ``ZeroRedundancyOptimizer`` uses a sorted-greedy algorithm to pack a number 2025-03-17T18:45:23.1249715Z of parameters at each rank. Each parameter belongs to a single rank and is 2025-03-17T18:45:23.1249969Z not divided among ranks. The partition is arbitrary and might not match the 2025-03-17T18:45:23.1250107Z the parameter registration or usage order. 2025-03-17T18:45:23.1250206Z 2025-03-17T18:45:23.1250299Z Arguments: 2025-03-17T18:45:23.1250507Z params (``Iterable``): an ``Iterable`` of :class:`torch.Tensor` s 2025-03-17T18:45:23.1250701Z or :class:`dict` s giving all parameters, which will be sharded 2025-03-17T18:45:23.1250810Z across ranks. 
2025-03-17T18:45:23.1250898Z 2025-03-17T18:45:23.1251002Z Keyword Args: 2025-03-17T18:45:23.1251230Z optimizer_class (:class:`torch.nn.Optimizer`): the class of the local 2025-03-17T18:45:23.1251338Z optimizer. 2025-03-17T18:45:23.1251578Z process_group (``ProcessGroup``, optional): ``torch.distributed`` 2025-03-17T18:45:23.1251788Z ``ProcessGroup`` (default: ``dist.group.WORLD`` initialized by 2025-03-17T18:45:23.1251942Z :meth:`torch.distributed.init_process_group`). 2025-03-17T18:45:23.1252185Z parameters_as_bucket_view (bool, optional): if ``True``, parameters are 2025-03-17T18:45:23.1252408Z packed into buckets to speed up communication, and ``param.data`` 2025-03-17T18:45:23.1252621Z fields point to bucket views at different offsets; if ``False``, 2025-03-17T18:45:23.1252832Z each individual parameter is communicated separately, and each 2025-03-17T18:45:23.1253000Z ``params.data`` stays intact (default: ``False``). 2025-03-17T18:45:23.1253196Z overlap_with_ddp (bool, optional): if ``True``, :meth:`step` is 2025-03-17T18:45:23.1253409Z overlapped with :class:`DistributedDataParallel` 's gradient 2025-03-17T18:45:23.1253629Z synchronization; this requires (1) either a functional optimizer 2025-03-17T18:45:23.1253827Z for the ``optimizer_class`` argument or one with a functional 2025-03-17T18:45:23.1254007Z equivalent and (2) registering a DDP communication hook 2025-03-17T18:45:23.1254221Z constructed from one of the functions in ``ddp_zero_hook.py``; 2025-03-17T18:45:23.1254394Z parameters are packed into buckets matching those in 2025-03-17T18:45:23.1254565Z :class:`DistributedDataParallel`, meaning that the 2025-03-17T18:45:23.1254720Z ``parameters_as_bucket_view`` argument is ignored. 2025-03-17T18:45:23.1254947Z If ``False``, :meth:`step` runs disjointly after the backward pass 2025-03-17T18:45:23.1255073Z (per normal). 2025-03-17T18:45:23.1255186Z (default: ``False``) 2025-03-17T18:45:23.1255400Z **defaults: any trailing arguments, which are forwarded to the local 2025-03-17T18:45:23.1255505Z optimizer. 2025-03-17T18:45:23.1255592Z 2025-03-17T18:45:23.1255686Z Example:: 2025-03-17T18:45:23.1255781Z 2025-03-17T18:45:23.1255882Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1256000Z >>> import torch.nn as nn 2025-03-17T18:45:23.1256204Z >>> from torch.distributed.optim import ZeroRedundancyOptimizer 2025-03-17T18:45:23.1256445Z >>> from torch.nn.parallel import DistributedDataParallel as DDP 2025-03-17T18:45:23.1256676Z >>> model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)]) 2025-03-17T18:45:23.1256809Z >>> ddp = DDP(model, device_ids=[rank]) 2025-03-17T18:45:23.1256935Z >>> opt = ZeroRedundancyOptimizer( 2025-03-17T18:45:23.1257058Z >>> ddp.parameters(), 2025-03-17T18:45:23.1257184Z >>> optimizer_class=torch.optim.Adam, 2025-03-17T18:45:23.1257285Z >>> lr=0.01 2025-03-17T18:45:23.1257367Z >>> ) 2025-03-17T18:45:23.1257493Z >>> ddp(inputs).sum().backward() 2025-03-17T18:45:23.1257586Z >>> opt.step() 2025-03-17T18:45:23.1257686Z 2025-03-17T18:45:23.1257776Z .. warning:: 2025-03-17T18:45:23.1258003Z Currently, ``ZeroRedundancyOptimizer`` requires that all of the 2025-03-17T18:45:23.1258151Z passed-in parameters are the same dense type. 2025-03-17T18:45:23.1258250Z 2025-03-17T18:45:23.1258337Z .. 
warning:: 2025-03-17T18:45:23.1258565Z If you pass ``overlap_with_ddp=True``, be wary of the following: Given 2025-03-17T18:45:23.1258767Z the way that overlapping :class:`DistributedDataParallel` with 2025-03-17T18:45:23.1259019Z :class:`ZeroRedundancyOptimizer` is currently implemented, the first 2025-03-17T18:45:23.1259243Z two or three training iterations do not perform parameter updates in 2025-03-17T18:45:23.1259450Z the optimizer step, depending on if ``static_graph=False`` or 2025-03-17T18:45:23.1259636Z ``static_graph=True``, respectively. This is because it needs 2025-03-17T18:45:23.1259865Z information about the gradient bucketing strategy used by 2025-03-17T18:45:23.1260089Z :class:`DistributedDataParallel`, which is not finalized until the 2025-03-17T18:45:23.1260306Z second forward pass if ``static_graph=False`` or until the third 2025-03-17T18:45:23.1260522Z forward pass if ``static_graph=True``. To adjust for this, one option 2025-03-17T18:45:23.1260652Z is to prepend dummy inputs. 2025-03-17T18:45:23.1260736Z 2025-03-17T18:45:23.1261007Z .. warning:: ZeroRedundancyOptimizer is experimental and subject to change. 2025-03-17T18:45:23.1261092Z 2025-03-17T18:45:23.1261243Z .. _ZeRO: https://arxiv.org/abs/1910.02054 2025-03-17T18:45:23.1261327Z 2025-03-17T18:45:23.1261424Z 2025-03-17T18:45:23.1261684Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1261780Z 2025-03-17T18:45:23.1261883Z warnings.warn(msg) 2025-03-17T18:45:23.1261981Z 2025-03-17T18:45:23.1262199Z --- Parse Warning: 74 / 116 --- 2025-03-17T18:45:23.1263215Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_CustomReducer in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/pipelining/microbatch.py line=28. 2025-03-17T18:45:23.1263486Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1263582Z 2025-03-17T18:45:23.1263816Z Custom reducer class that can be used to specify a custom operation that 2025-03-17T18:45:23.1264011Z reduces losses of multiple microbatches into one value. 2025-03-17T18:45:23.1264122Z 2025-03-17T18:45:23.1264224Z Example: 2025-03-17T18:45:23.1264351Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1264470Z >>> sum_reducer = _CustomReducer( 2025-03-17T18:45:23.1264588Z >>> torch.tensor(0.0), 2025-03-17T18:45:23.1264690Z >>> lambda a, b: a + b 2025-03-17T18:45:23.1264786Z >>> ) 2025-03-17T18:45:23.1264874Z 2025-03-17T18:45:23.1265142Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1265226Z 2025-03-17T18:45:23.1265343Z warnings.warn(msg) 2025-03-17T18:45:23.1265429Z 2025-03-17T18:45:23.1265674Z --- Parse Warning: 75 / 116 --- 2025-03-17T18:45:23.1266724Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=async_execution in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/functions.py line=6. 2025-03-17T18:45:23.1267011Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1267099Z 2025-03-17T18:45:23.1267357Z A decorator for a function indicating that the return value of the function 2025-03-17T18:45:23.1267574Z is guaranteed to be a :class:`~torch.futures.Future` object and this 2025-03-17T18:45:23.1267834Z function can run asynchronously on the RPC callee. 
More specifically, the 2025-03-17T18:45:23.1268079Z callee extracts the :class:`~torch.futures.Future` returned by the wrapped 2025-03-17T18:45:23.1268333Z function and installs subsequent processing steps as a callback to that 2025-03-17T18:45:23.1268574Z :class:`~torch.futures.Future`. The installed callback will read the value 2025-03-17T18:45:23.1268794Z from the :class:`~torch.futures.Future` when completed and send the 2025-03-17T18:45:23.1268982Z value back as the RPC response. That also means the returned 2025-03-17T18:45:23.1269226Z :class:`~torch.futures.Future` only exists on the callee side and is never 2025-03-17T18:45:23.1269462Z sent through RPC. This decorator is useful when the wrapped function's 2025-03-17T18:45:23.1269674Z (``fn``) execution needs to pause and resume due to, e.g., containing 2025-03-17T18:45:23.1269905Z :meth:`~torch.distributed.rpc.rpc_async` or waiting for other signals. 2025-03-17T18:45:23.1269999Z 2025-03-17T18:45:23.1270248Z .. note:: To enable asynchronous execution, applications must pass the 2025-03-17T18:45:23.1270496Z function object returned by this decorator to RPC APIs. If RPC detected 2025-03-17T18:45:23.1270723Z attributes installed by this decorator, it knows that this function 2025-03-17T18:45:23.1270922Z returns a ``Future`` object and will handle that accordingly. 2025-03-17T18:45:23.1271144Z However, this does not mean this decorator has to be outmost one when 2025-03-17T18:45:23.1271384Z defining a function. For example, when combined with ``@staticmethod`` 2025-03-17T18:45:23.1271606Z or ``@classmethod``, ``@rpc.functions.async_execution`` needs to be the 2025-03-17T18:45:23.1271844Z inner decorator to allow the target function be recognized as a static 2025-03-17T18:45:23.1272079Z or class function. This target function can still execute asynchronously 2025-03-17T18:45:23.1272320Z because, when accessed, the static or class method preserves attributes 2025-03-17T18:45:23.1272482Z installed by ``@rpc.functions.async_execution``. 2025-03-17T18:45:23.1272576Z 2025-03-17T18:45:23.1272661Z 2025-03-17T18:45:23.1272759Z Example:: 2025-03-17T18:45:23.1272970Z The returned :class:`~torch.futures.Future` object can come from 2025-03-17T18:45:23.1273120Z :meth:`~torch.distributed.rpc.rpc_async`, 2025-03-17T18:45:23.1273349Z :meth:`~torch.futures.Future.then`, or :class:`~torch.futures.Future` 2025-03-17T18:45:23.1273539Z constructor. The example below shows directly using the 2025-03-17T18:45:23.1273706Z :class:`~torch.futures.Future` returned by 2025-03-17T18:45:23.1273868Z :meth:`~torch.futures.Future.then`. 2025-03-17T18:45:23.1273954Z 2025-03-17T18:45:23.1274089Z >>> from torch.distributed import rpc 2025-03-17T18:45:23.1274177Z >>> 2025-03-17T18:45:23.1274305Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:23.1274392Z >>> 2025-03-17T18:45:23.1274502Z >>> # On all workers 2025-03-17T18:45:23.1274619Z >>> @rpc.functions.async_execution 2025-03-17T18:45:23.1274750Z >>> def async_add_chained(to, x, y, z): 2025-03-17T18:45:23.1274950Z >>> # This function runs on "worker1" and returns immediately when 2025-03-17T18:45:23.1275183Z >>> # the callback is installed through the `then(cb)` API. In the 2025-03-17T18:45:23.1275373Z >>> # mean time, the `rpc_async` to "worker2" can run concurrently. 
2025-03-17T18:45:23.1275549Z >>> # When the return value of that `rpc_async` arrives at 2025-03-17T18:45:23.1275743Z >>> # "worker1", "worker1" will run the lambda function accordingly 2025-03-17T18:45:23.1275947Z >>> # and set the value for the previously returned `Future`, which 2025-03-17T18:45:23.1276133Z >>> # will then trigger RPC to send the result back to "worker0". 2025-03-17T18:45:23.1276321Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:23.1276439Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:23.1276539Z >>> ) 2025-03-17T18:45:23.1276623Z >>> 2025-03-17T18:45:23.1276728Z >>> # On worker0 2025-03-17T18:45:23.1276825Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1276945Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:23.1277039Z >>> "worker1", 2025-03-17T18:45:23.1277153Z >>> async_add_chained, 2025-03-17T18:45:23.1277280Z >>> args=("worker2", torch.ones(2), 1, 1) 2025-03-17T18:45:23.1277377Z >>> ) 2025-03-17T18:45:23.1277500Z >>> print(ret) # prints tensor([3., 3.]) 2025-03-17T18:45:23.1277582Z 2025-03-17T18:45:23.1277829Z When combined with TorchScript decorators, this decorator must be the 2025-03-17T18:45:23.1277925Z outmost one. 2025-03-17T18:45:23.1278016Z 2025-03-17T18:45:23.1278124Z >>> from torch import Tensor 2025-03-17T18:45:23.1278278Z >>> from torch.futures import Future 2025-03-17T18:45:23.1278405Z >>> from torch.distributed import rpc 2025-03-17T18:45:23.1278502Z >>> 2025-03-17T18:45:23.1278623Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:23.1278716Z >>> 2025-03-17T18:45:23.1278820Z >>> # On all workers 2025-03-17T18:45:23.1278926Z >>> @torch.jit.script 2025-03-17T18:45:23.1279077Z >>> def script_add(x: Tensor, y: Tensor) -> Tensor: 2025-03-17T18:45:23.1279186Z >>> return x + y 2025-03-17T18:45:23.1279274Z >>> 2025-03-17T18:45:23.1279401Z >>> @rpc.functions.async_execution 2025-03-17T18:45:23.1279506Z >>> @torch.jit.script 2025-03-17T18:45:23.1279712Z >>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]: 2025-03-17T18:45:23.1279856Z >>> return rpc.rpc_async(to, script_add, (x, y)) 2025-03-17T18:45:23.1279951Z >>> 2025-03-17T18:45:23.1280045Z >>> # On worker0 2025-03-17T18:45:23.1280161Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:23.1280257Z >>> "worker1", 2025-03-17T18:45:23.1280362Z >>> async_add, 2025-03-17T18:45:23.1280481Z >>> args=("worker2", torch.ones(2), 1) 2025-03-17T18:45:23.1280580Z >>> ) 2025-03-17T18:45:23.1280701Z >>> print(ret) # prints tensor([2., 2.]) 2025-03-17T18:45:23.1280796Z 2025-03-17T18:45:23.1281020Z When combined with static or class method, this decorator must be the 2025-03-17T18:45:23.1281110Z inner one. 
2025-03-17T18:45:23.1281204Z 2025-03-17T18:45:23.1281328Z >>> from torch.distributed import rpc 2025-03-17T18:45:23.1281449Z >>> 2025-03-17T18:45:23.1281597Z >>> # omitting setup and shutdown RPC 2025-03-17T18:45:23.1281693Z >>> 2025-03-17T18:45:23.1281792Z >>> # On all workers 2025-03-17T18:45:23.1281917Z >>> class AsyncExecutionClass: 2025-03-17T18:45:23.1282004Z >>> 2025-03-17T18:45:23.1282116Z >>> @staticmethod 2025-03-17T18:45:23.1282238Z >>> @rpc.functions.async_execution 2025-03-17T18:45:23.1282365Z >>> def static_async_add(to, x, y, z): 2025-03-17T18:45:23.1282542Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:23.1282692Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:23.1282780Z >>> ) 2025-03-17T18:45:23.1282872Z >>> 2025-03-17T18:45:23.1282972Z >>> @classmethod 2025-03-17T18:45:23.1283102Z >>> @rpc.functions.async_execution 2025-03-17T18:45:23.1283229Z >>> def class_async_add(cls, to, x, y, z): 2025-03-17T18:45:23.1283368Z >>> ret_fut = torch.futures.Future() 2025-03-17T18:45:23.1283522Z >>> rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:23.1283682Z >>> lambda fut: ret_fut.set_result(fut.wait() + z) 2025-03-17T18:45:23.1283770Z >>> ) 2025-03-17T18:45:23.1283880Z >>> return ret_fut 2025-03-17T18:45:23.1283967Z >>> 2025-03-17T18:45:23.1284098Z >>> @rpc.functions.async_execution 2025-03-17T18:45:23.1284222Z >>> def bound_async_add(self, to, x, y, z): 2025-03-17T18:45:23.1284403Z >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then( 2025-03-17T18:45:23.1284524Z >>> lambda fut: fut.wait() + z 2025-03-17T18:45:23.1284624Z >>> ) 2025-03-17T18:45:23.1284710Z >>> 2025-03-17T18:45:23.1284814Z >>> # On worker0 2025-03-17T18:45:23.1284915Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:23.1285016Z >>> "worker1", 2025-03-17T18:45:23.1285164Z >>> AsyncExecutionClass.static_async_add, 2025-03-17T18:45:23.1285285Z >>> args=("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:23.1285382Z >>> ) 2025-03-17T18:45:23.1285504Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:23.1285597Z >>> 2025-03-17T18:45:23.1285724Z >>> ret = rpc.rpc_sync( 2025-03-17T18:45:23.1285832Z >>> "worker1", 2025-03-17T18:45:23.1285970Z >>> AsyncExecutionClass.class_async_add, 2025-03-17T18:45:23.1286099Z >>> args=("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:23.1286189Z >>> ) 2025-03-17T18:45:23.1286322Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:23.1286407Z 2025-03-17T18:45:23.1286579Z This decorator also works with RRef helpers, i.e., . 2025-03-17T18:45:23.1286726Z :meth:`torch.distributed.rpc.RRef.rpc_sync`, 2025-03-17T18:45:23.1286897Z :meth:`torch.distributed.rpc.RRef.rpc_async`, and 2025-03-17T18:45:23.1287044Z :meth:`torch.distributed.rpc.RRef.remote`. 
2025-03-17T18:45:23.1287139Z 2025-03-17T18:45:23.1287264Z >>> from torch.distributed import rpc 2025-03-17T18:45:23.1287360Z >>> 2025-03-17T18:45:23.1287502Z >>> # reuse the AsyncExecutionClass class above 2025-03-17T18:45:23.1287672Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:23.1287887Z >>> ret = rref.rpc_sync().static_async_add("worker2", torch.ones(2), 1, 2) 2025-03-17T18:45:23.1288016Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:23.1288101Z >>> 2025-03-17T18:45:23.1288270Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:23.1288507Z >>> ret = rref.rpc_async().static_async_add("worker2", torch.ones(2), 1, 2).wait() 2025-03-17T18:45:23.1288642Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:23.1288753Z >>> 2025-03-17T18:45:23.1288910Z >>> rref = rpc.remote("worker1", AsyncExecutionClass) 2025-03-17T18:45:23.1289174Z >>> ret = rref.remote().static_async_add("worker2", torch.ones(2), 1, 2).to_here() 2025-03-17T18:45:23.1289305Z >>> print(ret) # prints tensor([4., 4.]) 2025-03-17T18:45:23.1289388Z 2025-03-17T18:45:23.1289654Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1289737Z 2025-03-17T18:45:23.1289849Z warnings.warn(msg) 2025-03-17T18:45:23.1289932Z 2025-03-17T18:45:23.1290151Z --- Parse Warning: 76 / 116 --- 2025-03-17T18:45:23.1291243Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=TensorPipeRpcBackendOptions.set_device_map in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/options.py line=108. 2025-03-17T18:45:23.1291558Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1291646Z 2025-03-17T18:45:23.1291872Z Set device mapping between each RPC caller and callee pair. This 2025-03-17T18:45:23.1292061Z function can be called multiple times to incrementally add 2025-03-17T18:45:23.1292190Z device placement configurations. 2025-03-17T18:45:23.1292275Z 2025-03-17T18:45:23.1292371Z Args: 2025-03-17T18:45:23.1292474Z to (str): Callee name. 2025-03-17T18:45:23.1292685Z device_map (Dict of int, str, or torch.device): Device placement 2025-03-17T18:45:23.1292871Z mappings from this worker to the callee. This map must be 2025-03-17T18:45:23.1292971Z invertible. 
2025-03-17T18:45:23.1293055Z 2025-03-17T18:45:23.1293148Z Example: 2025-03-17T18:45:23.1293278Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1293375Z >>> # both workers 2025-03-17T18:45:23.1293479Z >>> def add(x, y): 2025-03-17T18:45:23.1293623Z >>> print(x) # tensor([1., 1.], device='cuda:1') 2025-03-17T18:45:23.1293740Z >>> return x + y, (x + y).to(2) 2025-03-17T18:45:23.1293828Z >>> 2025-03-17T18:45:23.1293930Z >>> # on worker 0 2025-03-17T18:45:23.1294071Z >>> options = TensorPipeRpcBackendOptions( 2025-03-17T18:45:23.1294185Z >>> num_worker_threads=8, 2025-03-17T18:45:23.1294328Z >>> device_maps={"worker1": {0: 1}} 2025-03-17T18:45:23.1294470Z >>> # maps worker0's cuda:0 to worker1's cuda:1 2025-03-17T18:45:23.1294558Z >>> ) 2025-03-17T18:45:23.1294699Z >>> options.set_device_map("worker1", {1: 2}) 2025-03-17T18:45:23.1294830Z >>> # maps worker0's cuda:1 to worker1's cuda:2 2025-03-17T18:45:23.1294926Z >>> 2025-03-17T18:45:23.1295025Z >>> rpc.init_rpc( 2025-03-17T18:45:23.1295125Z >>> "worker0", 2025-03-17T18:45:23.1295215Z >>> rank=0, 2025-03-17T18:45:23.1295317Z >>> world_size=2, 2025-03-17T18:45:23.1295447Z >>> backend=rpc.BackendType.TENSORPIPE, 2025-03-17T18:45:23.1295569Z >>> rpc_backend_options=options 2025-03-17T18:45:23.1295660Z >>> ) 2025-03-17T18:45:23.1295754Z >>> 2025-03-17T18:45:23.1295853Z >>> x = torch.ones(2) 2025-03-17T18:45:23.1296023Z >>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1)) 2025-03-17T18:45:23.1296214Z >>> # The first argument will be moved to cuda:1 on worker1. When 2025-03-17T18:45:23.1296410Z >>> # sending the return value back, it will follow the invert of 2025-03-17T18:45:23.1296588Z >>> # the device map, and hence will be moved back to cuda:0 and 2025-03-17T18:45:23.1296691Z >>> # cuda:1 on worker0 2025-03-17T18:45:23.1296843Z >>> print(rets[0]) # tensor([2., 2.], device='cuda:0') 2025-03-17T18:45:23.1296997Z >>> print(rets[1]) # tensor([2., 2.], device='cuda:1') 2025-03-17T18:45:23.1297080Z 2025-03-17T18:45:23.1297341Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1297450Z 2025-03-17T18:45:23.1297550Z warnings.warn(msg) 2025-03-17T18:45:23.1297683Z 2025-03-17T18:45:23.1297874Z --- Parse Warning: 77 / 116 --- 2025-03-17T18:45:23.1299009Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=_server_process_global_profile in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/rpc/server_process_global_profiler.py line=19. 2025-03-17T18:45:23.1299284Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1299368Z 2025-03-17T18:45:23.1299599Z It has the same API as ``torch.autograd.profiler.profile`` class, 2025-03-17T18:45:23.1299893Z except that it enables profiling on all threads running RPC server request callbacks. 2025-03-17T18:45:23.1299975Z 2025-03-17T18:45:23.1300273Z Context manager that manages autograd profiler state and holds a summary of results. 2025-03-17T18:45:23.1300513Z Under the hood it just records events of functions being executed in C++ and 2025-03-17T18:45:23.1300759Z exposes those events to Python. You can wrap any code into it and it will 2025-03-17T18:45:23.1300887Z only report runtime of PyTorch functions. 
2025-03-17T18:45:23.1301169Z Note: profiler is thread local and is automatically propagated into the async tasks 2025-03-17T18:45:23.1301257Z 2025-03-17T18:45:23.1301346Z Args: 2025-03-17T18:45:23.1301621Z enabled (bool, optional): Setting this to False makes this context manager a no-op. 2025-03-17T18:45:23.1301725Z Default: ``True``. 2025-03-17T18:45:23.1301812Z 2025-03-17T18:45:23.1302105Z use_cuda (bool, optional): Enables timing of CUDA events as well using the cudaEvent API. 2025-03-17T18:45:23.1302307Z Adds approximately 4us of overhead to each tensor operation. 2025-03-17T18:45:23.1302408Z Default: ``False`` 2025-03-17T18:45:23.1302494Z 2025-03-17T18:45:23.1302733Z record_shapes (bool, optional): If shapes recording is set, information 2025-03-17T18:45:23.1302966Z about input dimensions will be collected. This allows one to see which 2025-03-17T18:45:23.1303190Z dimensions have been used under the hood and further group by them 2025-03-17T18:45:23.1303433Z using prof.key_averages(group_by_input_shape=True). Please note that 2025-03-17T18:45:23.1303672Z shape recording might skew your profiling data. It is recommended to 2025-03-17T18:45:23.1303913Z use separate runs with and without shape recording to validate the timing. 2025-03-17T18:45:23.1304151Z Most likely the skew will be negligible for bottom most events (in a case 2025-03-17T18:45:23.1304370Z of nested function calls). But for higher level functions the total 2025-03-17T18:45:23.1304583Z self cpu time might be artificially increased because of the shape 2025-03-17T18:45:23.1304678Z collection. 2025-03-17T18:45:23.1304767Z 2025-03-17T18:45:23.1305041Z profile_memory (bool, optional): Whether to report memory usage, default: ``False`` 2025-03-17T18:45:23.1305132Z 2025-03-17T18:45:23.1305218Z .. warning: 2025-03-17T18:45:23.1305435Z Enabling memory profiling incurs additional profiler overhead 2025-03-17T18:45:23.1305519Z 2025-03-17T18:45:23.1305613Z .. warning: 2025-03-17T18:45:23.1305864Z Due to some CUDA multiprocessing limitations (multiprocessing-cuda-note_), 2025-03-17T18:45:23.1306071Z one cannot use the profiler with ``use_cuda = True`` to benchmark 2025-03-17T18:45:23.1306313Z DataLoaders with ``num_workers > 0``. If you wish to benchmark data loading, 2025-03-17T18:45:23.1306579Z please use ``use_cuda = False`` or ``num_workers = 0``. 2025-03-17T18:45:23.1306666Z 2025-03-17T18:45:23.1306766Z Example: 2025-03-17T18:45:23.1306869Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1307017Z >>> # On worker 0: 2025-03-17T18:45:23.1307115Z >>> import torch 2025-03-17T18:45:23.1307300Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1307446Z >>> rpc.init_rpc("worker0", rank=0, world_size=2) 2025-03-17T18:45:23.1307582Z >>> x, y = torch.tensor(1), torch.tensor(2) 2025-03-17T18:45:23.1307704Z >>> outer_profile_rref = rpc.remote( 2025-03-17T18:45:23.1307879Z ... dst_worker_name, rpc._server_process_global_profile 2025-03-17T18:45:23.1307966Z ... ) 2025-03-17T18:45:23.1308107Z >>> outer_profile_rref.rpc_sync().__enter__() 2025-03-17T18:45:23.1308258Z >>> rpc.rpc_sync(dst_worker_name, torch.add, (x, y)) 2025-03-17T18:45:23.1308416Z >>> inner_profile_rref = rpc.remote( 2025-03-17T18:45:23.1308580Z ... dst_worker_name, rpc._server_process_global_profile 2025-03-17T18:45:23.1308673Z ... 
) 2025-03-17T18:45:23.1308803Z >>> inner_profile_rref.rpc_sync().__enter__() 2025-03-17T18:45:23.1308960Z >>> rpc.rpc_sync(dst_worker_name, torch.sub, (x, y)) 2025-03-17T18:45:23.1309138Z >>> inner_profile_rref.rpc_sync().__exit__(None, None, None) 2025-03-17T18:45:23.1309319Z >>> outer_profile_rref.rpc_sync().__exit__(None, None, None) 2025-03-17T18:45:23.1309483Z >>> print(inner_profile_rref.rpc_sync().key_averages()) 2025-03-17T18:45:23.1309728Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1310047Z Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls 2025-03-17T18:45:23.1310284Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1310476Z sub 85.06% 76.275us 100.00% 89.667us 89.667us 1 2025-03-17T18:45:23.1310675Z empty 14.94% 13.392us 14.94% 13.392us 13.392us 1 2025-03-17T18:45:23.1310904Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1311024Z Self CPU time total: 89.667us 2025-03-17T18:45:23.1311188Z >>> print(outer_profile_rref.rpc_sync().key_averages()) 2025-03-17T18:45:23.1311445Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1311757Z Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg Number of Calls 2025-03-17T18:45:23.1311992Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1312169Z sub 35.65% 76.275us 41.91% 89.667us 89.667us 1 2025-03-17T18:45:23.1312367Z empty 12.67% 27.101us 12.67% 27.101us 13.551us 2 2025-03-17T18:45:23.1312552Z add 51.68% 110.550us 58.09% 124.259us 124.259us 1 2025-03-17T18:45:23.1312791Z --------- --------------- --------------- --------------- --------------- --------------- --------------- 2025-03-17T18:45:23.1312905Z Self CPU time total: 213.926us 2025-03-17T18:45:23.1313011Z >>> rpc.shutdown() 2025-03-17T18:45:23.1313095Z 2025-03-17T18:45:23.1313199Z >>> # On worker 1: 2025-03-17T18:45:23.1313331Z >>> import torch.distributed.rpc as rpc 2025-03-17T18:45:23.1313482Z >>> rpc.init_rpc("worker1", rank=1, world_size=2) 2025-03-17T18:45:23.1313647Z >>> # wait for worker 0 to finish work, and then shutdown. 2025-03-17T18:45:23.1313754Z >>> rpc.shutdown() 2025-03-17T18:45:23.1313836Z 2025-03-17T18:45:23.1314105Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1314215Z 2025-03-17T18:45:23.1314323Z warnings.warn(msg) 2025-03-17T18:45:23.1314430Z 2025-03-17T18:45:23.1314644Z --- Parse Warning: 78 / 116 --- 2025-03-17T18:45:23.1315662Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=local_map in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/experimental/_func_map.py line=33. 2025-03-17T18:45:23.1315937Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1316019Z 2025-03-17T18:45:23.1316297Z :meth:`local_map` is an experimental API that allows users to pass :class:`DTensor` s 2025-03-17T18:45:23.1316607Z to a function that is written to be applied on ``torch.Tensor`` s. 
It is done by extracting 2025-03-17T18:45:23.1316886Z the local components of :class:`DTensor`, call the function, and wrap the outputs to 2025-03-17T18:45:23.1317050Z :class:`DTensor` according to the ``out_placements``. 2025-03-17T18:45:23.1317143Z 2025-03-17T18:45:23.1317231Z Args: 2025-03-17T18:45:23.1317453Z func (Callable): the function to be applied on each local shard of 2025-03-17T18:45:23.1317550Z :class:`DTensor` s. 2025-03-17T18:45:23.1317795Z out_placements (Union[`PlacementType`, Tuple[`PlacementType`, ...]]): 2025-03-17T18:45:23.1318057Z the desired placements of the :class:`DTensor` s in ``func``'s flattened output. 2025-03-17T18:45:23.1318318Z If the flattened ``output`` is a single value, the ``out_placements`` should be 2025-03-17T18:45:23.1318565Z of type `PlacementType`. Otherwise if the flattened ``output`` has multiple 2025-03-17T18:45:23.1318833Z values, the ``out_placements`` should be a tuple of `PlacementType` values 1:1 2025-03-17T18:45:23.1318958Z mapping to the flattened ``output``. 2025-03-17T18:45:23.1319175Z Besides, for :class:`Tensor` output, we use `PlacementType` as its 2025-03-17T18:45:23.1319456Z placements (a `Tuple[Placement]` value). For non-Tensor output, the `PlacementType` 2025-03-17T18:45:23.1319562Z should be `None`. 2025-03-17T18:45:23.1319805Z Note that the only exception is when no :class:`DTensor` argument is passed 2025-03-17T18:45:23.1320064Z in. In this case, even if `out_placements` is not `None`, the result function 2025-03-17T18:45:23.1320325Z should ignore the desired placements because the function is not running with 2025-03-17T18:45:23.1320434Z :class:`DTensor` s. 2025-03-17T18:45:23.1320600Z in_placements (Tuple[`PlacementType`, ...], optional): 2025-03-17T18:45:23.1320898Z the required placements of the :class:`DTensor` s in the flattened inputs of ``func``. 2025-03-17T18:45:23.1321135Z If ``in_placements`` is specified, :meth:`local_map` would examine whether the 2025-03-17T18:45:23.1321375Z placements of each :class:`DTensor` argument is the same as the required 2025-03-17T18:45:23.1321563Z placements or not. If the placements are not the same and 2025-03-17T18:45:23.1321821Z ``redistribute_inputs`` is ``False``, an exception will be raised. Otherwise if 2025-03-17T18:45:23.1322067Z ``redistribute_inputs`` is ``True``, the argument will be first redistributed to 2025-03-17T18:45:23.1322337Z the required sharding placements before passing its local tensor to ``func``. 2025-03-17T18:45:23.1322564Z The only exception is when required placements are not ``None`` and the 2025-03-17T18:45:23.1322815Z argument is a :class:`torch.Tensor`. In this case, the placements examination 2025-03-17T18:45:23.1323038Z will be skipped and the argument will be directly passed to ``func``. 2025-03-17T18:45:23.1323274Z If ``in_placements`` is ``None``, no placements examination will be performed. 2025-03-17T18:45:23.1323398Z Default: None 2025-03-17T18:45:23.1323545Z device_mesh (:class:`DeviceMesh`, optional): 2025-03-17T18:45:23.1323786Z the device mesh that all the :class:`DTensor` s are placed on. If not 2025-03-17T18:45:23.1324031Z specified, this will be inferred from the input :class:`DTensor` s' device 2025-03-17T18:45:23.1324273Z mesh. `local_map` requires every :class:`DTensor` s to be placed on the same 2025-03-17T18:45:23.1324394Z device mesh. Default: None. 
2025-03-17T18:45:23.1324519Z redistribute_inputs (bool, optional): 2025-03-17T18:45:23.1324784Z the bool value indicating whether to reshard the input :class:`DTensor` s when 2025-03-17T18:45:23.1325064Z their placements are different from the required input placements. If this 2025-03-17T18:45:23.1325302Z value is ``False`` and some :class:`DTensor` input has a different placement, 2025-03-17T18:45:23.1325445Z an exception will be raised. Default: False. 2025-03-17T18:45:23.1325537Z 2025-03-17T18:45:23.1325624Z Returns: 2025-03-17T18:45:23.1325902Z A ``Callable`` that applies ``func`` to each local shard of the input :class:`DTensor` 2025-03-17T18:45:23.1326145Z and returns a :class:`DTensor` constructed from the return value of ``func``. 2025-03-17T18:45:23.1326239Z 2025-03-17T18:45:23.1326327Z Raises: 2025-03-17T18:45:23.1326594Z AssertionError: If the input :class:`DTensor` is not placed on the same device 2025-03-17T18:45:23.1326837Z mesh, or if they are placed on a different device mesh than the ``device_mesh`` 2025-03-17T18:45:23.1326952Z argument passed in. 2025-03-17T18:45:23.1327039Z 2025-03-17T18:45:23.1327292Z AssertionError: For any non-DTensor output, we require its corresponding 2025-03-17T18:45:23.1327558Z output placement in ``out_placements`` be None. An AssertionError will be raised 2025-03-17T18:45:23.1327672Z if this is not the case. 2025-03-17T18:45:23.1327758Z 2025-03-17T18:45:23.1328037Z ValueError: If ``redistribute_inputs=False`` but the input :class:`DTensor` needs 2025-03-17T18:45:23.1328194Z a redistribution according to ``in_placements``. 2025-03-17T18:45:23.1328284Z 2025-03-17T18:45:23.1328372Z Example: 2025-03-17T18:45:23.1328499Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1328662Z >>> def mm_allreduce_forward(device_mesh, W, X): 2025-03-17T18:45:23.1328794Z >>> partial_sum_tensor = torch.mm(W, X) 2025-03-17T18:45:23.1329038Z >>> reduced_tensor = funcol.all_reduce(partial_sum_tensor, "sum", device_mesh) 2025-03-17T18:45:23.1329153Z >>> return reduced_tensor 2025-03-17T18:45:23.1329242Z >>> 2025-03-17T18:45:23.1329372Z >>> W = torch.randn(12, 8, requires_grad=False) 2025-03-17T18:45:23.1329515Z >>> X = torch.randn(8, 16, requires_grad=False) 2025-03-17T18:45:23.1329612Z >>> Y = torch.mm(W, X) 2025-03-17T18:45:23.1329820Z >>> row_wise = [Shard(0)] # row-wise sharding placements on 1-d mesh 2025-03-17T18:45:23.1330008Z >>> col_wise = [Shard(1)] # col-wise sharding placements on 1-d mesh 2025-03-17T18:45:23.1330103Z >>> 2025-03-17T18:45:23.1330378Z >>> # local_mm_allreduce_forward is the function wrapped with DTensor/Tensor convertion 2025-03-17T18:45:23.1330519Z >>> local_mm_allreduce_forward = local_map( 2025-03-17T18:45:23.1330625Z >>> mm_allreduce_forward, 2025-03-17T18:45:23.1330757Z >>> out_placements=[Replicate()], 2025-03-17T18:45:23.1330878Z >>> in_placements=[col_wise, row_wise], 2025-03-17T18:45:23.1330999Z >>> device_mesh=device_mesh, 2025-03-17T18:45:23.1331082Z >>> ) 2025-03-17T18:45:23.1331174Z >>> 2025-03-17T18:45:23.1331279Z >>> W_dt = distribute_tensor( 2025-03-17T18:45:23.1331400Z ... W, device_mesh, (col_wise) 2025-03-17T18:45:23.1331514Z ... ) # col-wisely sharded W tensor 2025-03-17T18:45:23.1331665Z >>> X_dt = distribute_tensor( 2025-03-17T18:45:23.1331798Z ... X, device_mesh, (row_wise) 2025-03-17T18:45:23.1331922Z ... ) # row-wisely sharded X tensor 2025-03-17T18:45:23.1332039Z >>> Y_dt = local_mm_allreduce_forward( 2025-03-17T18:45:23.1332151Z ... device_mesh, W_dt, X_dt 2025-03-17T18:45:23.1332299Z ... 
) # apply local_mm_allreduce_forward to DTensors 2025-03-17T18:45:23.1332386Z 2025-03-17T18:45:23.1332588Z .. note:: This API is currently experimental and subject to change 2025-03-17T18:45:23.1332680Z 2025-03-17T18:45:23.1332936Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1333055Z 2025-03-17T18:45:23.1333152Z warnings.warn(msg) 2025-03-17T18:45:23.1333243Z 2025-03-17T18:45:23.1333439Z --- Parse Warning: 79 / 116 --- 2025-03-17T18:45:23.1334553Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=register_sharding in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/experimental/_register_sharding.py line=26. 2025-03-17T18:45:23.1334823Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1334915Z 2025-03-17T18:45:23.1335200Z :meth:`register_sharding` is an experimental API that allows users to register sharding 2025-03-17T18:45:23.1335455Z strategies for an operator when the tensor inputs and outputs are DTensor. 2025-03-17T18:45:23.1335710Z It can be useful when: (1) there doesn't exist a default sharding strategy for ``op``, 2025-03-17T18:45:23.1335965Z e.g. when ``op`` is a custom operator that is not supported by :class:`DTensor`; (2) 2025-03-17T18:45:23.1336246Z when users would like to override default sharding strategies of existing operators. 2025-03-17T18:45:23.1336338Z 2025-03-17T18:45:23.1336425Z Args: 2025-03-17T18:45:23.1336566Z op (Union[OpOverload, List[OpOverload]]): 2025-03-17T18:45:23.1336914Z An op or a list of ops to register the customized sharding function. 2025-03-17T18:45:23.1337008Z 2025-03-17T18:45:23.1337095Z Returns: 2025-03-17T18:45:23.1337381Z A function decorator which can be used to wrap a function that defines the sharding 2025-03-17T18:45:23.1337716Z strategy for the operator specified in ``op``. The defined sharding strategy will be 2025-03-17T18:45:23.1338009Z registered to DTensor and will override the default sharding strategy if DTensor has 2025-03-17T18:45:23.1338316Z already implemented the operator. The customized sharding function takes the same inputs 2025-03-17T18:45:23.1338575Z as the original op (except that if an arg is a :class:`torch.Tensor`, it will be 2025-03-17T18:45:23.1338853Z replaced by a tensor-like object that DTensor uses internally). The function should 2025-03-17T18:45:23.1339142Z return a sequence of 2-tuples, each specifying acceptable output placements and its 2025-03-17T18:45:23.1339265Z corresponding input placements.
2025-03-17T18:45:23.1339359Z 2025-03-17T18:45:23.1339449Z Example: 2025-03-17T18:45:23.1339578Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1339718Z >>> @register_sharding(aten._softmax.default) 2025-03-17T18:45:23.1339893Z >>> def custom_softmax_sharding(x, dim, half_to_float): 2025-03-17T18:45:23.1340041Z >>> softmax_dim = dim if dim >= 0 else dim + x.ndim 2025-03-17T18:45:23.1340165Z >>> acceptable_shardings = [] 2025-03-17T18:45:23.1340251Z >>> 2025-03-17T18:45:23.1340438Z >>> all_replicate = ([Replicate()], [Replicate(), None, None]) 2025-03-17T18:45:23.1340586Z >>> acceptable_shardings.append(all_replicate) 2025-03-17T18:45:23.1340677Z >>> 2025-03-17T18:45:23.1340801Z >>> for sharding_dim in range(x.ndim): 2025-03-17T18:45:23.1340952Z >>> if sharding_dim != softmax_dim: 2025-03-17T18:45:23.1341102Z >>> all_sharded = ( 2025-03-17T18:45:23.1341216Z >>> [Shard(sharding_dim)], 2025-03-17T18:45:23.1341357Z >>> [Shard(sharding_dim), None, None], 2025-03-17T18:45:23.1341446Z >>> ) 2025-03-17T18:45:23.1341601Z >>> acceptable_shardings.append(all_sharded) 2025-03-17T18:45:23.1341682Z >>> 2025-03-17T18:45:23.1341811Z >>> return acceptable_shardings 2025-03-17T18:45:23.1341892Z 2025-03-17T18:45:23.1342103Z .. note:: This API is currently experimental and subject to change 2025-03-17T18:45:23.1342219Z 2025-03-17T18:45:23.1342488Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1342569Z 2025-03-17T18:45:23.1342680Z warnings.warn(msg) 2025-03-17T18:45:23.1342760Z 2025-03-17T18:45:23.1342972Z --- Parse Warning: 80 / 116 --- 2025-03-17T18:45:23.1344012Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=PrepareModuleInput in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py line=403. 2025-03-17T18:45:23.1344287Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1344366Z 2025-03-17T18:45:23.1344771Z Configure the nn.Module's inputs to convert the input tensors of the nn.Module to DTensors at runtime according to 2025-03-17T18:45:23.1345094Z ``input_layouts``, and perform layout redistribution according to the ``desired_input_layouts``. 2025-03-17T18:45:23.1345190Z 2025-03-17T18:45:23.1345279Z Keyword Args: 2025-03-17T18:45:23.1345494Z input_layouts (Union[Placement, Tuple[Optional[Placement]]]): 2025-03-17T18:45:23.1345826Z The DTensor layouts of input tensors for the nn.Module, this is used to convert the input tensors to 2025-03-17T18:45:23.1346205Z DTensors. If some inputs are not torch.Tensor or no need to convert to DTensors, ``None`` need to be specified 2025-03-17T18:45:23.1346323Z as a placeholder. default: None. 2025-03-17T18:45:23.1346648Z desired_input_layouts (Union[Placement, Tuple[Optional[Placement]]]): 2025-03-17T18:45:23.1347060Z The desired DTensor layout of input tensors for the nn.Module, this is used to ensure the inputs of the nn.Module 2025-03-17T18:45:23.1347474Z have the desired DTensor layouts. This argument needs to have the same length with ``input_layouts``. default: None. 2025-03-17T18:45:23.1347610Z input_kwarg_layouts (Dict[str, Placement]): 2025-03-17T18:45:23.1348001Z The DTensor layouts of input kwargs for the nn.Module, this is used to convert the input kwarg tensors to DTensors. 
2025-03-17T18:45:23.1348097Z default: None 2025-03-17T18:45:23.1348273Z desired_input_kwarg_layouts: (Dict[str, Placement]): 2025-03-17T18:45:23.1348654Z The desired DTensor layout of input kwargs for the nn.Module, this is used to ensure the inputs of the nn.Module 2025-03-17T18:45:23.1348813Z have the desired DTensor layouts. default: None. 2025-03-17T18:45:23.1348934Z use_local_output (bool, optional): 2025-03-17T18:45:23.1349310Z Whether to use local :class:`torch.Tensor` instead of :class:`DTensor` for the module inputs, default: False. 2025-03-17T18:45:23.1349398Z Returns: 2025-03-17T18:45:23.1349734Z A :class:`ParallelStyle` object that prepares the sharding layouts of the nn.Module's inputs. 2025-03-17T18:45:23.1349819Z 2025-03-17T18:45:23.1349920Z Example:: 2025-03-17T18:45:23.1350032Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:23.1350357Z >>> from torch.distributed.tensor.parallel import parallelize_module, PrepareModuleInput 2025-03-17T18:45:23.1350557Z >>> from torch.distributed.device_mesh import init_device_mesh 2025-03-17T18:45:23.1350681Z >>> ... 2025-03-17T18:45:23.1351015Z >>> block = TransformerBlock(...) # block is a nn.Module that contains an "attn" Attention submodule 2025-03-17T18:45:23.1351159Z >>> tp_mesh = init_device_mesh("cuda", (8,)) 2025-03-17T18:45:23.1351245Z >>> 2025-03-17T18:45:23.1351594Z >>> # According to the style specified below, the first input of attn will be annotated to Sharded DTensor 2025-03-17T18:45:23.1351751Z >>> # and then redistributed to Replicated DTensor. 2025-03-17T18:45:23.1351862Z >>> parallelize_module( 2025-03-17T18:45:23.1352000Z >>> block, # this can be a submodule or module 2025-03-17T18:45:23.1352131Z >>> tp_mesh, 2025-03-17T18:45:23.1352238Z >>> parallelize_plan={ 2025-03-17T18:45:23.1352367Z >>> "attn": PrepareModuleInput( 2025-03-17T18:45:23.1352509Z >>> input_layouts=(Shard(0), None, None, ...), 2025-03-17T18:45:23.1352691Z >>> desired_input_layouts=(Replicate(), None, None, ...) 2025-03-17T18:45:23.1352784Z >>> ), 2025-03-17T18:45:23.1352877Z >>> } 2025-03-17T18:45:23.1352961Z >>> ) 2025-03-17T18:45:23.1353054Z 2025-03-17T18:45:23.1353314Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1353407Z 2025-03-17T18:45:23.1353509Z warnings.warn(msg) 2025-03-17T18:45:23.1353601Z 2025-03-17T18:45:23.1353798Z --- Parse Warning: 81 / 116 --- 2025-03-17T18:45:23.1354849Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=PrepareModuleOutput in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributed/tensor/parallel/style.py line=562. 2025-03-17T18:45:23.1355121Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1355215Z 2025-03-17T18:45:23.1355619Z Configure the nn.Module's outputs to convert the output tensors of the nn.Module to DTensors at runtime according to 2025-03-17T18:45:23.1355969Z ``output_layouts``, and perform layout redistribution according to the ``desired_output_layouts``. 2025-03-17T18:45:23.1356053Z 2025-03-17T18:45:23.1356155Z Keyword Args: 2025-03-17T18:45:23.1356351Z output_layouts (Union[Placement, Tuple[Placement]]): 2025-03-17T18:45:23.1356710Z The DTensor layouts of output tensors for the nn.Module, this is used to convert the output tensors to 2025-03-17T18:45:23.1357097Z DTensors if they are :class:`torch.Tensor`. 
If some outputs are not torch.Tensor or no need to convert to DTensors, 2025-03-17T18:45:23.1357258Z ``None`` need to be specified as a placeholder. 2025-03-17T18:45:23.1357460Z desired_output_layouts (Union[Placement, Tuple[Placement]]): 2025-03-17T18:45:23.1357870Z The desired DTensor layouts of output tensors for the nn.Module, this is used to ensure the outputs of the nn.Module 2025-03-17T18:45:23.1357997Z have the desired DTensor layouts. 2025-03-17T18:45:23.1358126Z use_local_output (bool, optional): 2025-03-17T18:45:23.1358494Z Whether to use local :class:`torch.Tensor` instead of :class:`DTensor` for the module outputs, default: True. 2025-03-17T18:45:23.1358594Z Returns: 2025-03-17T18:45:23.1358896Z A ParallelStyle object that prepares the sharding layouts of the nn.Module's outputs. 2025-03-17T18:45:23.1358991Z 2025-03-17T18:45:23.1359084Z Example:: 2025-03-17T18:45:23.1359209Z >>> # xdoctest: +SKIP(failing) 2025-03-17T18:45:23.1359534Z >>> from torch.distributed.tensor.parallel import parallelize_module, PrepareModuleOutput 2025-03-17T18:45:23.1359748Z >>> from torch.distributed.device_mesh import init_device_mesh 2025-03-17T18:45:23.1359836Z >>> ... 2025-03-17T18:45:23.1360157Z >>> block = TransformerBlock(...) # block is a nn.Module that contains an "attn" Attention submodule 2025-03-17T18:45:23.1360371Z >>> tp_mesh = init_device_mesh("cuda", (8,)) 2025-03-17T18:45:23.1360472Z >>> 2025-03-17T18:45:23.1360881Z >>> # According to the style specified below, the output of the TransformerBlock will be converted to Replicated DTensor 2025-03-17T18:45:23.1361036Z >>> # and then redistributed to Sharded DTensor. 2025-03-17T18:45:23.1361141Z >>> parallelize_module( 2025-03-17T18:45:23.1361293Z >>> block, # this can be a submodule or module 2025-03-17T18:45:23.1361388Z >>> tp_mesh, 2025-03-17T18:45:23.1361540Z >>> parallelize_plan = PrepareModuleOutput( 2025-03-17T18:45:23.1361691Z >>> output_layouts=Replicate(), 2025-03-17T18:45:23.1361826Z >>> desired_output_layouts=Shard(0) 2025-03-17T18:45:23.1361912Z >>> ) 2025-03-17T18:45:23.1362013Z >>> ) 2025-03-17T18:45:23.1362097Z 2025-03-17T18:45:23.1362370Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1362452Z 2025-03-17T18:45:23.1362554Z warnings.warn(msg) 2025-03-17T18:45:23.1362648Z 2025-03-17T18:45:23.1362837Z --- Parse Warning: 82 / 116 --- 2025-03-17T18:45:23.1363956Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=LowRankMultivariateNormal in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/lowrank_multivariate_normal.py line=55. 2025-03-17T18:45:23.1364236Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1364320Z 2025-03-17T18:45:23.1364634Z Creates a multivariate normal distribution with covariance matrix having a low-rank form 2025-03-17T18:45:23.1364834Z parameterized by :attr:`cov_factor` and :attr:`cov_diag`:: 2025-03-17T18:45:23.1364918Z 2025-03-17T18:45:23.1365106Z covariance_matrix = cov_factor @ cov_factor.T + cov_diag 2025-03-17T18:45:23.1365191Z 2025-03-17T18:45:23.1365293Z Example: 2025-03-17T18:45:23.1365446Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_LAPACK) 2025-03-17T18:45:23.1365599Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:23.1365720Z >>> m = LowRankMultivariateNormal( 2025-03-17T18:45:23.1365950Z ... torch.zeros(2), torch.tensor([[1.0], [0.0]]), torch.ones(2) 2025-03-17T18:45:23.1366039Z ... 
) 2025-03-17T18:45:23.1366344Z >>> m.sample() # normally distributed with mean=`[0,0]`, cov_factor=`[[1],[0]]`, cov_diag=`[1,1]` 2025-03-17T18:45:23.1366446Z tensor([-0.2102, -0.5429]) 2025-03-17T18:45:23.1366538Z 2025-03-17T18:45:23.1366624Z Args: 2025-03-17T18:45:23.1366875Z loc (Tensor): mean of the distribution with shape `batch_shape + event_shape` 2025-03-17T18:45:23.1367137Z cov_factor (Tensor): factor part of low-rank form of covariance matrix with shape 2025-03-17T18:45:23.1367275Z `batch_shape + event_shape + (rank,)` 2025-03-17T18:45:23.1367538Z cov_diag (Tensor): diagonal part of low-rank form of covariance matrix with shape 2025-03-17T18:45:23.1367662Z `batch_shape + event_shape` 2025-03-17T18:45:23.1367747Z 2025-03-17T18:45:23.1367842Z Note: 2025-03-17T18:45:23.1368117Z The computation for determinant and inverse of covariance matrix is avoided when 2025-03-17T18:45:23.1368373Z `cov_factor.shape[1] << cov_factor.shape[0]` thanks to `Woodbury matrix identity 2025-03-17T18:45:23.1368586Z `_ and 2025-03-17T18:45:23.1368892Z `matrix determinant lemma `_. 2025-03-17T18:45:23.1369150Z Thanks to these formulas, we just need to compute the determinant and inverse of 2025-03-17T18:45:23.1369287Z the small size "capacitance" matrix:: 2025-03-17T18:45:23.1369370Z 2025-03-17T18:45:23.1369593Z capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor 2025-03-17T18:45:23.1369677Z 2025-03-17T18:45:23.1369970Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1370056Z 2025-03-17T18:45:23.1370167Z warnings.warn(msg) 2025-03-17T18:45:23.1370249Z 2025-03-17T18:45:23.1370454Z --- Parse Warning: 83 / 116 --- 2025-03-17T18:45:23.1371473Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=MixtureSameFamily in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/mixture_same_family.py line=13. 2025-03-17T18:45:23.1371775Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1371860Z 2025-03-17T18:45:23.1372100Z The `MixtureSameFamily` distribution implements a (batch of) mixture 2025-03-17T18:45:23.1372355Z distribution where all component are from different parameterizations of 2025-03-17T18:45:23.1372589Z the same distribution type. It is parameterized by a `Categorical` 2025-03-17T18:45:23.1372790Z "selecting distribution" (over `k` component) and a component 2025-03-17T18:45:23.1373014Z distribution, i.e., a `Distribution` with a rightmost batch shape 2025-03-17T18:45:23.1373180Z (equal to `[k]`) which indexes each (batch of) component. 
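[editor's note] The LowRankMultivariateNormal warning above quotes the capacitance identity but its clipped doctest never runs; here is a minimal sketch (added for this log, not part of the captured docstring) that checks the low-rank parameterization against an explicit dense covariance:

    import torch
    from torch.distributions import LowRankMultivariateNormal, MultivariateNormal

    torch.manual_seed(0)
    loc = torch.zeros(3)
    cov_factor = torch.randn(3, 1)      # low-rank factor, shape (event_size, rank)
    cov_diag = torch.rand(3) + 0.5      # strictly positive diagonal part

    low_rank = LowRankMultivariateNormal(loc, cov_factor, cov_diag)
    dense = MultivariateNormal(
        loc, covariance_matrix=cov_factor @ cov_factor.T + torch.diag(cov_diag)
    )

    x = torch.randn(3)
    # Same density either way; per the quoted note, the low-rank form only needs the
    # small capacitance matrix I + cov_factor.T @ inv(cov_diag) @ cov_factor internally.
    print(torch.allclose(low_rank.log_prob(x), dense.log_prob(x), atol=1e-5))  # True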
2025-03-17T18:45:23.1373274Z 2025-03-17T18:45:23.1373369Z Examples:: 2025-03-17T18:45:23.1373464Z 2025-03-17T18:45:23.1373588Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1373808Z >>> # Construct Gaussian Mixture Model in 1D consisting of 5 equally 2025-03-17T18:45:23.1373927Z >>> # weighted normal distributions 2025-03-17T18:45:23.1374060Z >>> mix = D.Categorical(torch.ones(5,)) 2025-03-17T18:45:23.1374209Z >>> comp = D.Normal(torch.randn(5,), torch.rand(5,)) 2025-03-17T18:45:23.1374342Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:23.1374431Z 2025-03-17T18:45:23.1374650Z >>> # Construct Gaussian Mixture Model in 2D consisting of 5 equally 2025-03-17T18:45:23.1374786Z >>> # weighted bivariate normal distributions 2025-03-17T18:45:23.1374919Z >>> mix = D.Categorical(torch.ones(5,)) 2025-03-17T18:45:23.1375035Z >>> comp = D.Independent(D.Normal( 2025-03-17T18:45:23.1375200Z ... torch.randn(5,2), torch.rand(5,2)), 1) 2025-03-17T18:45:23.1375325Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:23.1375417Z 2025-03-17T18:45:23.1375606Z >>> # Construct a batch of 3 Gaussian Mixture Models in 2D each 2025-03-17T18:45:23.1375816Z >>> # consisting of 5 random weighted bivariate normal distributions 2025-03-17T18:45:23.1375950Z >>> mix = D.Categorical(torch.rand(3,5)) 2025-03-17T18:45:23.1376063Z >>> comp = D.Independent(D.Normal( 2025-03-17T18:45:23.1376210Z ... torch.randn(3,5,2), torch.rand(3,5,2)), 1) 2025-03-17T18:45:23.1376336Z >>> gmm = MixtureSameFamily(mix, comp) 2025-03-17T18:45:23.1376431Z 2025-03-17T18:45:23.1376522Z Args: 2025-03-17T18:45:23.1376742Z mixture_distribution: `torch.distributions.Categorical`-like 2025-03-17T18:45:23.1376935Z instance. Manages the probability of selecting component. 2025-03-17T18:45:23.1377122Z The number of categories must match the rightmost batch 2025-03-17T18:45:23.1377313Z dimension of the `component_distribution`. Must have either 2025-03-17T18:45:23.1377467Z scalar `batch_shape` or `batch_shape` matching 2025-03-17T18:45:23.1377609Z `component_distribution.batch_shape[:-1]` 2025-03-17T18:45:23.1377847Z component_distribution: `torch.distributions.Distribution`-like 2025-03-17T18:45:23.1378028Z instance. Right-most batch dimension indexes component. 2025-03-17T18:45:23.1378123Z 2025-03-17T18:45:23.1378380Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1378505Z 2025-03-17T18:45:23.1378632Z warnings.warn(msg) 2025-03-17T18:45:23.1378730Z 2025-03-17T18:45:23.1378919Z --- Parse Warning: 84 / 116 --- 2025-03-17T18:45:23.1379940Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=RelaxedBernoulli in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/relaxed_bernoulli.py line=111. 2025-03-17T18:45:23.1380205Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1380304Z 2025-03-17T18:45:23.1380489Z Creates a RelaxedBernoulli distribution, parametrized by 2025-03-17T18:45:23.1380725Z :attr:`temperature`, and either :attr:`probs` or :attr:`logits` 2025-03-17T18:45:23.1380949Z (but not both). This is a relaxed version of the `Bernoulli` distribution, 2025-03-17T18:45:23.1381147Z so the values are in (0, 1), and has reparametrizable samples. 
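[editor's note] The RelaxedBernoulli docstring quoted above says its samples are reparametrizable, but the doctest only calls sample(); a small illustrative sketch (not from the log) checking that gradients flow through rsample back to the logits:

    import torch
    from torch.distributions import RelaxedBernoulli

    temperature = torch.tensor(2.2)
    logits = torch.zeros(4, requires_grad=True)

    dist = RelaxedBernoulli(temperature, logits=logits)
    sample = dist.rsample()            # continuous relaxation, values in (0, 1)
    sample.sum().backward()

    print(sample)                      # relaxed draws lying inside (0, 1)
    print(logits.grad is not None)     # True: sampling is reparameterized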
2025-03-17T18:45:23.1381233Z 2025-03-17T18:45:23.1381332Z Example:: 2025-03-17T18:45:23.1381416Z 2025-03-17T18:45:23.1381577Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:23.1381709Z >>> m = RelaxedBernoulli(torch.tensor([2.2]), 2025-03-17T18:45:23.1381851Z ... torch.tensor([0.1, 0.2, 0.3, 0.99])) 2025-03-17T18:45:23.1381942Z >>> m.sample() 2025-03-17T18:45:23.1382075Z tensor([ 0.2951, 0.3442, 0.8918, 0.9021]) 2025-03-17T18:45:23.1382156Z 2025-03-17T18:45:23.1382253Z Args: 2025-03-17T18:45:23.1382394Z temperature (Tensor): relaxation temperature 2025-03-17T18:45:23.1382586Z probs (Number, Tensor): the probability of sampling `1` 2025-03-17T18:45:23.1382747Z logits (Number, Tensor): the log-odds of sampling `1` 2025-03-17T18:45:23.1382843Z 2025-03-17T18:45:23.1383100Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1383197Z 2025-03-17T18:45:23.1383297Z warnings.warn(msg) 2025-03-17T18:45:23.1383390Z 2025-03-17T18:45:23.1383580Z --- Parse Warning: 85 / 116 --- 2025-03-17T18:45:23.1384677Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=RelaxedOneHotCategorical in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/distributions/relaxed_categorical.py line=101. 2025-03-17T18:45:23.1384947Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1385042Z 2025-03-17T18:45:23.1385261Z Creates a RelaxedOneHotCategorical distribution parametrized by 2025-03-17T18:45:23.1385480Z :attr:`temperature`, and either :attr:`probs` or :attr:`logits`. 2025-03-17T18:45:23.1385723Z This is a relaxed version of the :class:`OneHotCategorical` distribution, so 2025-03-17T18:45:23.1385901Z its samples are on simplex, and are reparametrizable. 2025-03-17T18:45:23.1385988Z 2025-03-17T18:45:23.1386088Z Example:: 2025-03-17T18:45:23.1386171Z 2025-03-17T18:45:23.1386314Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:23.1386552Z >>> m = RelaxedOneHotCategorical(torch.tensor([2.2]), 2025-03-17T18:45:23.1386683Z ... torch.tensor([0.1, 0.2, 0.3, 0.4])) 2025-03-17T18:45:23.1386789Z >>> m.sample() 2025-03-17T18:45:23.1386910Z tensor([ 0.1294, 0.2324, 0.3859, 0.2523]) 2025-03-17T18:45:23.1387009Z 2025-03-17T18:45:23.1387098Z Args: 2025-03-17T18:45:23.1387253Z temperature (Tensor): relaxation temperature 2025-03-17T18:45:23.1387377Z probs (Tensor): event probabilities 2025-03-17T18:45:23.1387578Z logits (Tensor): unnormalized log probability for each event 2025-03-17T18:45:23.1387660Z 2025-03-17T18:45:23.1387928Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1388049Z 2025-03-17T18:45:23.1388159Z warnings.warn(msg) 2025-03-17T18:45:23.1388243Z 2025-03-17T18:45:23.1388475Z --- Parse Warning: 86 / 116 --- 2025-03-17T18:45:23.1389507Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assoc_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=245. 2025-03-17T18:45:23.1389790Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1389987Z Return a new dict with new, potentially nested, key value pair 2025-03-17T18:45:23.1390083Z 2025-03-17T18:45:23.1390206Z >>> purchase = { 2025-03-17T18:45:23.1390316Z ... "name": "Alice", 2025-03-17T18:45:23.1390496Z ... 
"order": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:23.1390627Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:23.1390717Z ... } 2025-03-17T18:45:23.1390940Z >>> assoc_in(purchase, ["order", "costs"], [0.25, 1.00]) # doctest: +SKIP 2025-03-17T18:45:23.1391054Z {'credit card': '5555-1234-1234-1234', 2025-03-17T18:45:23.1391162Z 'name': 'Alice', 2025-03-17T18:45:23.1391330Z 'order': {'costs': [0.25, 1.00], 'items': ['Apple', 'Orange']}} 2025-03-17T18:45:23.1391428Z 2025-03-17T18:45:23.1391686Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1391785Z 2025-03-17T18:45:23.1391888Z warnings.warn(msg) 2025-03-17T18:45:23.1391990Z 2025-03-17T18:45:23.1392176Z --- Parse Warning: 87 / 116 --- 2025-03-17T18:45:23.1393222Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=update_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=261. 2025-03-17T18:45:23.1393494Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1393662Z Update value in a (potentially) nested dictionary 2025-03-17T18:45:23.1393748Z 2025-03-17T18:45:23.1393848Z inputs: 2025-03-17T18:45:23.1393967Z d - dictionary on which to operate 2025-03-17T18:45:23.1394223Z keys - list or tuple giving the location of the value to be changed in d 2025-03-17T18:45:23.1394352Z func - function to operate on that value 2025-03-17T18:45:23.1394446Z 2025-03-17T18:45:23.1394642Z If keys == [k0,..,kX] and d[k0]..[kX] == v, update_in returns a copy of the 2025-03-17T18:45:23.1394890Z original dictionary with v replaced by func(v), but does not mutate the 2025-03-17T18:45:23.1395002Z original dictionary. 2025-03-17T18:45:23.1395096Z 2025-03-17T18:45:23.1395309Z If k0 is not a key in d, update_in creates nested dictionaries to the depth 2025-03-17T18:45:23.1395537Z specified by the keys, with the innermost value set to func(default). 2025-03-17T18:45:23.1395625Z 2025-03-17T18:45:23.1395743Z >>> inc = lambda x: x + 1 2025-03-17T18:45:23.1395853Z >>> update_in({"a": 0}, ["a"], inc) 2025-03-17T18:45:23.1395950Z {'a': 1} 2025-03-17T18:45:23.1396035Z 2025-03-17T18:45:23.1396138Z >>> transaction = { 2025-03-17T18:45:23.1396243Z ... "name": "Alice", 2025-03-17T18:45:23.1396438Z ... "purchase": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:23.1396571Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:23.1396657Z ... 
} 2025-03-17T18:45:23.1396883Z >>> update_in(transaction, ["purchase", "costs"], sum) # doctest: +SKIP 2025-03-17T18:45:23.1396998Z {'credit card': '5555-1234-1234-1234', 2025-03-17T18:45:23.1397100Z 'name': 'Alice', 2025-03-17T18:45:23.1397270Z 'purchase': {'costs': 1.75, 'items': ['Apple', 'Orange']}} 2025-03-17T18:45:23.1397390Z 2025-03-17T18:45:23.1397515Z >>> # updating a value when k0 is not in d 2025-03-17T18:45:23.1397685Z >>> update_in({}, [1, 2, 3], str, default="bar") 2025-03-17T18:45:23.1397784Z {1: {2: {3: 'bar'}}} 2025-03-17T18:45:23.1397910Z >>> update_in({1: "foo"}, [2, 3, 4], inc, 0) 2025-03-17T18:45:23.1398012Z {1: 'foo', 2: {3: {4: 1}}} 2025-03-17T18:45:23.1398109Z 2025-03-17T18:45:23.1398368Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1398458Z 2025-03-17T18:45:23.1398563Z warnings.warn(msg) 2025-03-17T18:45:23.1398655Z 2025-03-17T18:45:23.1398843Z --- Parse Warning: 88 / 116 --- 2025-03-17T18:45:23.1399916Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=get_in in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=320. 2025-03-17T18:45:23.1400190Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1400374Z Returns coll[i0][i1]...[iX] where [i0, i1, ..., iX]==keys. 2025-03-17T18:45:23.1400459Z 2025-03-17T18:45:23.1400648Z If coll[i0][i1]...[iX] cannot be found, returns ``default``, unless 2025-03-17T18:45:23.1400856Z ``no_default`` is specified, then it raises KeyError or IndexError. 2025-03-17T18:45:23.1400948Z 2025-03-17T18:45:23.1401159Z ``get_in`` is a generalization of ``operator.getitem`` for nested data 2025-03-17T18:45:23.1401304Z structures such as dictionaries and lists. 2025-03-17T18:45:23.1401388Z 2025-03-17T18:45:23.1401500Z >>> transaction = { 2025-03-17T18:45:23.1401601Z ... "name": "Alice", 2025-03-17T18:45:23.1401799Z ... "purchase": {"items": ["Apple", "Orange"], "costs": [0.50, 1.25]}, 2025-03-17T18:45:23.1401921Z ... "credit card": "5555-1234-1234-1234", 2025-03-17T18:45:23.1402020Z ... } 2025-03-17T18:45:23.1402160Z >>> get_in(["purchase", "items", 0], transaction) 2025-03-17T18:45:23.1402262Z 'Apple' 2025-03-17T18:45:23.1402374Z >>> get_in(["name"], transaction) 2025-03-17T18:45:23.1402472Z 'Alice' 2025-03-17T18:45:23.1402606Z >>> get_in(["purchase", "total"], transaction) 2025-03-17T18:45:23.1402793Z >>> get_in(["purchase", "items", "apple"], transaction) 2025-03-17T18:45:23.1402934Z >>> get_in(["purchase", "items", 10], transaction) 2025-03-17T18:45:23.1403087Z >>> get_in(["purchase", "total"], transaction, 0) 2025-03-17T18:45:23.1403174Z 0 2025-03-17T18:45:23.1403291Z >>> get_in(["y"], {}, no_default=True) 2025-03-17T18:45:23.1403421Z Traceback (most recent call last): 2025-03-17T18:45:23.1403511Z ... 
2025-03-17T18:45:23.1403613Z KeyError: 'y' 2025-03-17T18:45:23.1403699Z 2025-03-17T18:45:23.1403796Z See Also: 2025-03-17T18:45:23.1403894Z itertoolz.get 2025-03-17T18:45:23.1404007Z operator.getitem 2025-03-17T18:45:23.1404094Z 2025-03-17T18:45:23.1404362Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1404446Z 2025-03-17T18:45:23.1404554Z warnings.warn(msg) 2025-03-17T18:45:23.1404639Z 2025-03-17T18:45:23.1404835Z --- Parse Warning: 89 / 116 --- 2025-03-17T18:45:23.1405872Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=groupby in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/unification/unification_tools.py line=373. 2025-03-17T18:45:23.1406150Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1406269Z Group a collection by a key function 2025-03-17T18:45:23.1406360Z 2025-03-17T18:45:23.1406528Z >>> names = ["Alice", "Bob", "Charlie", "Dan", "Edith", "Frank"] 2025-03-17T18:45:23.1406688Z >>> groupby(len, names) # doctest: +SKIP 2025-03-17T18:45:23.1406876Z {3: ['Bob', 'Dan'], 5: ['Alice', 'Edith', 'Frank'], 7: ['Charlie']} 2025-03-17T18:45:23.1406970Z 2025-03-17T18:45:23.1407081Z >>> iseven = lambda x: x % 2 == 0 2025-03-17T18:45:23.1407259Z >>> groupby(iseven, [1, 2, 3, 4, 5, 6, 7, 8]) # doctest: +SKIP 2025-03-17T18:45:23.1407378Z {False: [1, 3, 5, 7], True: [2, 4, 6, 8]} 2025-03-17T18:45:23.1407472Z 2025-03-17T18:45:23.1407619Z Non-callable keys imply grouping on a member. 2025-03-17T18:45:23.1407713Z 2025-03-17T18:45:23.1407804Z >>> groupby( 2025-03-17T18:45:23.1407908Z ... "gender", 2025-03-17T18:45:23.1408026Z ... [ 2025-03-17T18:45:23.1408159Z ... {"name": "Alice", "gender": "F"}, 2025-03-17T18:45:23.1408279Z ... {"name": "Bob", "gender": "M"}, 2025-03-17T18:45:23.1408415Z ... {"name": "Charlie", "gender": "M"}, 2025-03-17T18:45:23.1408507Z ... ], 2025-03-17T18:45:23.1408615Z ... ) # doctest:+SKIP 2025-03-17T18:45:23.1408737Z {'F': [{'gender': 'F', 'name': 'Alice'}], 2025-03-17T18:45:23.1408852Z 'M': [{'gender': 'M', 'name': 'Bob'}, 2025-03-17T18:45:23.1408981Z {'gender': 'M', 'name': 'Charlie'}]} 2025-03-17T18:45:23.1409063Z 2025-03-17T18:45:23.1409220Z Not to be confused with ``itertools.groupby`` 2025-03-17T18:45:23.1409303Z 2025-03-17T18:45:23.1409403Z See Also: 2025-03-17T18:45:23.1409494Z countby 2025-03-17T18:45:23.1409594Z 2025-03-17T18:45:23.1409850Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1409949Z 2025-03-17T18:45:23.1410054Z warnings.warn(msg) 2025-03-17T18:45:23.1410148Z 2025-03-17T18:45:23.1410336Z --- Parse Warning: 90 / 116 --- 2025-03-17T18:45:23.1411273Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SyncBatchNorm in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py line=601. 2025-03-17T18:45:23.1411542Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1411735Z Applies Batch Normalization over a N-Dimensional input. 2025-03-17T18:45:23.1411820Z 2025-03-17T18:45:23.1412213Z The N-D input is a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper 2025-03-17T18:45:23.1412447Z `Batch Normalization: Accelerating Deep Network Training by Reducing 2025-03-17T18:45:23.1412674Z Internal Covariate Shift `__ . 
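[editor's note] Parse warnings 87 and 88 above describe update_in and get_in only through prose and clipped doctests; the following pure-Python sketch of the described semantics is an illustration written for this log, not the unification_tools implementation:

    def get_in(keys, coll, default=None):
        # Walk `keys` into nested dicts/lists, returning `default` on any miss.
        for k in keys:
            try:
                coll = coll[k]
            except (KeyError, IndexError, TypeError):
                return default
        return coll

    def update_in(d, keys, func, default=None):
        # Return a copy of `d` with d[k0]...[kX] replaced by func(old value);
        # missing levels are created, with the innermost set to func(default).
        k, rest = keys[0], keys[1:]
        out = dict(d)
        if rest:
            out[k] = update_in(out.get(k, {}), rest, func, default)
        else:
            out[k] = func(out[k]) if k in out else func(default)
        return out

    print(update_in({"a": 0}, ["a"], lambda x: x + 1))          # {'a': 1}
    print(update_in({}, [1, 2, 3], str, default="bar"))         # {1: {2: {3: 'bar'}}}
    print(get_in(["purchase", "total"], {"purchase": {}}, 0))   # 0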
2025-03-17T18:45:23.1412761Z 2025-03-17T18:45:23.1412867Z .. math:: 2025-03-17T18:45:23.1412949Z 2025-03-17T18:45:23.1413179Z y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta 2025-03-17T18:45:23.1413265Z 2025-03-17T18:45:23.1413507Z The mean and standard-deviation are calculated per-dimension over all 2025-03-17T18:45:23.1413746Z mini-batches of the same process groups. :math:`\gamma` and :math:`\beta` 2025-03-17T18:45:23.1414002Z are learnable parameter vectors of size `C` (where `C` is the input size). 2025-03-17T18:45:23.1414185Z By default, the elements of :math:`\gamma` are sampled from 2025-03-17T18:45:23.1414395Z :math:`\mathcal{U}(0, 1)` and the elements of :math:`\beta` are set to 0. 2025-03-17T18:45:23.1414661Z The standard-deviation is calculated via the biased estimator, equivalent to 2025-03-17T18:45:23.1414790Z `torch.var(input, unbiased=False)`. 2025-03-17T18:45:23.1414878Z 2025-03-17T18:45:23.1415130Z Also by default, during training this layer keeps running estimates of its 2025-03-17T18:45:23.1415369Z computed mean and variance, which are then used for normalization during 2025-03-17T18:45:23.1415626Z evaluation. The running estimates are kept with a default :attr:`momentum` 2025-03-17T18:45:23.1415741Z of 0.1. 2025-03-17T18:45:23.1415859Z 2025-03-17T18:45:23.1416093Z If :attr:`track_running_stats` is set to ``False``, this layer then does not 2025-03-17T18:45:23.1416324Z keep running estimates, and batch statistics are instead used during 2025-03-17T18:45:23.1416435Z evaluation time as well. 2025-03-17T18:45:23.1416528Z 2025-03-17T18:45:23.1416617Z .. note:: 2025-03-17T18:45:23.1416863Z This :attr:`momentum` argument is different from one used in optimizer 2025-03-17T18:45:23.1417093Z classes and the conventional notion of momentum. Mathematically, the 2025-03-17T18:45:23.1417272Z update rule for running statistics here is 2025-03-17T18:45:23.1417547Z :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`, 2025-03-17T18:45:23.1417769Z where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the 2025-03-17T18:45:23.1417878Z new observed value. 2025-03-17T18:45:23.1417978Z 2025-03-17T18:45:23.1418289Z Because the Batch Normalization is done for each channel in the ``C`` dimension, computing 2025-03-17T18:45:23.1418557Z statistics on ``(N, +)`` slices, it's common terminology to call this Volumetric Batch 2025-03-17T18:45:23.1418746Z Normalization or Spatio-temporal Batch Normalization. 2025-03-17T18:45:23.1418845Z 2025-03-17T18:45:23.1418998Z Currently :class:`SyncBatchNorm` only supports 2025-03-17T18:45:23.1419303Z :class:`~torch.nn.DistributedDataParallel` (DDP) with single GPU per process. Use 2025-03-17T18:45:23.1419525Z :meth:`torch.nn.SyncBatchNorm.convert_sync_batchnorm()` to convert 2025-03-17T18:45:23.1419750Z :attr:`BatchNorm*D` layer to :class:`SyncBatchNorm` before wrapping 2025-03-17T18:45:23.1419855Z Network with DDP. 2025-03-17T18:45:23.1419953Z 2025-03-17T18:45:23.1420047Z Args: 2025-03-17T18:45:23.1420236Z num_features: :math:`C` from an expected input of size 2025-03-17T18:45:23.1420341Z :math:`(N, C, +)` 2025-03-17T18:45:23.1420546Z eps: a value added to the denominator for numerical stability. 2025-03-17T18:45:23.1420651Z Default: ``1e-5`` 2025-03-17T18:45:23.1420876Z momentum: the value used for the running_mean and running_var 2025-03-17T18:45:23.1421107Z computation. Can be set to ``None`` for cumulative moving average 2025-03-17T18:45:23.1421246Z (i.e. 
simple average). Default: 0.1 2025-03-17T18:45:23.1421455Z affine: a boolean value that when set to ``True``, this module has 2025-03-17T18:45:23.1421627Z learnable affine parameters. Default: ``True`` 2025-03-17T18:45:23.1421848Z track_running_stats: a boolean value that when set to ``True``, this 2025-03-17T18:45:23.1422096Z module tracks the running mean and variance, and when set to ``False``, 2025-03-17T18:45:23.1422332Z this module does not track such statistics, and initializes statistics 2025-03-17T18:45:23.1422550Z buffers :attr:`running_mean` and :attr:`running_var` as ``None``. 2025-03-17T18:45:23.1422784Z When these buffers are ``None``, this module always uses batch statistics. 2025-03-17T18:45:23.1422949Z in both training and eval modes. Default: ``True`` 2025-03-17T18:45:23.1423205Z process_group: synchronization of stats happen within each process group 2025-03-17T18:45:23.1423450Z individually. Default behavior is synchronization across the whole 2025-03-17T18:45:23.1423545Z world 2025-03-17T18:45:23.1423645Z 2025-03-17T18:45:23.1423736Z Shape: 2025-03-17T18:45:23.1423850Z - Input: :math:`(N, C, +)` 2025-03-17T18:45:23.1424009Z - Output: :math:`(N, C, +)` (same shape as input) 2025-03-17T18:45:23.1424125Z 2025-03-17T18:45:23.1424227Z .. note:: 2025-03-17T18:45:23.1424503Z Synchronization of batchnorm statistics occurs only while training, i.e. 2025-03-17T18:45:23.1424720Z synchronization is disabled when ``model.eval()`` is set or if 2025-03-17T18:45:23.1424854Z ``self.training`` is otherwise ``False``. 2025-03-17T18:45:23.1424948Z 2025-03-17T18:45:23.1425048Z Examples:: 2025-03-17T18:45:23.1425144Z 2025-03-17T18:45:23.1425250Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1425377Z >>> # With Learnable Parameters 2025-03-17T18:45:23.1425493Z >>> m = nn.SyncBatchNorm(100) 2025-03-17T18:45:23.1425659Z >>> # creating process group (optional) 2025-03-17T18:45:23.1425804Z >>> # ranks is a list of int identifying rank ids. 2025-03-17T18:45:23.1425924Z >>> ranks = list(range(8)) 2025-03-17T18:45:23.1426035Z >>> r1, r2 = ranks[:4], ranks[4:] 2025-03-17T18:45:23.1426198Z >>> # Note: every rank calls into new_group for every 2025-03-17T18:45:23.1426357Z >>> # process group created, even if that rank is not 2025-03-17T18:45:23.1426538Z >>> # part of the group. 
2025-03-17T18:45:23.1426796Z >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] 2025-03-17T18:45:23.1427020Z >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] 2025-03-17T18:45:23.1427141Z >>> # Without Learnable Parameters 2025-03-17T18:45:23.1427361Z >>> m = nn.BatchNorm3d(100, affine=False, process_group=process_group) 2025-03-17T18:45:23.1427490Z >>> input = torch.randn(20, 100, 35, 45, 10) 2025-03-17T18:45:23.1427608Z >>> output = m(input) 2025-03-17T18:45:23.1427692Z 2025-03-17T18:45:23.1427823Z >>> # network is nn.BatchNorm layer 2025-03-17T18:45:23.1428106Z >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group) 2025-03-17T18:45:23.1428287Z >>> # only single gpu per process is currently supported 2025-03-17T18:45:23.1428507Z >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel( 2025-03-17T18:45:23.1428637Z >>> sync_bn_network, 2025-03-17T18:45:23.1428799Z >>> device_ids=[args.local_rank], 2025-03-17T18:45:23.1428945Z >>> output_device=args.local_rank) 2025-03-17T18:45:23.1429033Z 2025-03-17T18:45:23.1429301Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1429388Z 2025-03-17T18:45:23.1429502Z warnings.warn(msg) 2025-03-17T18:45:23.1429585Z 2025-03-17T18:45:23.1429802Z --- Parse Warning: 91 / 116 --- 2025-03-17T18:45:23.1430841Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SyncBatchNorm.convert_sync_batchnorm in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/batchnorm.py line=825. 2025-03-17T18:45:23.1431122Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1431439Z Converts all :attr:`BatchNorm*D` layers in the model to :class:`torch.nn.SyncBatchNorm` layers. 2025-03-17T18:45:23.1431537Z 2025-03-17T18:45:23.1431628Z Args: 2025-03-17T18:45:23.1431894Z module (nn.Module): module containing one or more :attr:`BatchNorm*D` layers 2025-03-17T18:45:23.1432120Z process_group (optional): process group to scope synchronization, 2025-03-17T18:45:23.1432254Z default is the whole world 2025-03-17T18:45:23.1432341Z 2025-03-17T18:45:23.1432443Z Returns: 2025-03-17T18:45:23.1432702Z The original :attr:`module` with the converted :class:`torch.nn.SyncBatchNorm` 2025-03-17T18:45:23.1432930Z layers. If the original :attr:`module` is a :attr:`BatchNorm*D` layer, 2025-03-17T18:45:23.1433211Z a new :class:`torch.nn.SyncBatchNorm` layer object will be returned 2025-03-17T18:45:23.1433316Z instead. 2025-03-17T18:45:23.1433401Z 2025-03-17T18:45:23.1433509Z Example:: 2025-03-17T18:45:23.1433594Z 2025-03-17T18:45:23.1433734Z >>> # Network with nn.BatchNorm layer 2025-03-17T18:45:23.1433879Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA) 2025-03-17T18:45:23.1434011Z >>> module = torch.nn.Sequential( 2025-03-17T18:45:23.1434134Z >>> torch.nn.Linear(20, 100), 2025-03-17T18:45:23.1434297Z >>> torch.nn.BatchNorm1d(100), 2025-03-17T18:45:23.1434397Z >>> ).cuda() 2025-03-17T18:45:23.1434531Z >>> # creating process group (optional) 2025-03-17T18:45:23.1434693Z >>> # ranks is a list of int identifying rank ids. 
2025-03-17T18:45:23.1434807Z >>> ranks = list(range(8)) 2025-03-17T18:45:23.1434934Z >>> r1, r2 = ranks[:4], ranks[4:] 2025-03-17T18:45:23.1435089Z >>> # Note: every rank calls into new_group for every 2025-03-17T18:45:23.1435258Z >>> # process group created, even if that rank is not 2025-03-17T18:45:23.1435370Z >>> # part of the group. 2025-03-17T18:45:23.1435506Z >>> # xdoctest: +SKIP("distributed") 2025-03-17T18:45:23.1435764Z >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] 2025-03-17T18:45:23.1435986Z >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] 2025-03-17T18:45:23.1436291Z >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group) 2025-03-17T18:45:23.1436387Z 2025-03-17T18:45:23.1436477Z 2025-03-17T18:45:23.1436941Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1437031Z 2025-03-17T18:45:23.1437149Z warnings.warn(msg) 2025-03-17T18:45:23.1437240Z 2025-03-17T18:45:23.1437455Z --- Parse Warning: 92 / 116 --- 2025-03-17T18:45:23.1438407Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=Unflatten in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/flatten.py line=60. 2025-03-17T18:45:23.1438692Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1438776Z 2025-03-17T18:45:23.1439109Z Unflattens a tensor dim expanding it to a desired shape. For use with :class:`~nn.Sequential`. 2025-03-17T18:45:23.1439197Z 2025-03-17T18:45:23.1439486Z * :attr:`dim` specifies the dimension of the input tensor to be unflattened, and it can 2025-03-17T18:45:23.1439716Z be either `int` or `str` when `Tensor` or `NamedTensor` is used, respectively. 2025-03-17T18:45:23.1439816Z 2025-03-17T18:45:23.1440143Z * :attr:`unflattened_size` is the new shape of the unflattened dimension of the tensor and it can be 2025-03-17T18:45:23.1440411Z a `tuple` of ints or a `list` of ints or `torch.Size` for `Tensor` input; a `NamedShape` 2025-03-17T18:45:23.1440583Z (tuple of `(name, size)` tuples) for `NamedTensor` input. 2025-03-17T18:45:23.1440681Z 2025-03-17T18:45:23.1440771Z Shape: 2025-03-17T18:45:23.1441003Z - Input: :math:`(*, S_{\text{dim}}, *)`, where :math:`S_{\text{dim}}` is the size at 2025-03-17T18:45:23.1441258Z dimension :attr:`dim` and :math:`*` means any number of dimensions including none. 2025-03-17T18:45:23.1441498Z - Output: :math:`(*, U_1, ..., U_n, *)`, where :math:`U` = :attr:`unflattened_size` and 2025-03-17T18:45:23.1441630Z :math:`\prod_{i=1}^n U_i = S_{\text{dim}}`. 
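[editor's note] The SyncBatchNorm warning further up quotes the running-statistics rule x_hat_new = (1 - momentum) * x_hat + momentum * x_t without a runnable check; a small single-process sketch added here for illustration (it uses plain BatchNorm1d, since SyncBatchNorm itself requires a process group):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bn = nn.BatchNorm1d(4, momentum=0.1)   # running_mean starts at zeros
    x = torch.randn(32, 4)

    bn.train()
    bn(x)

    # One application of the quoted rule, with x_t = the per-channel batch mean.
    expected = (1 - 0.1) * torch.zeros(4) + 0.1 * x.mean(dim=0)
    print(torch.allclose(bn.running_mean, expected, atol=1e-5))   # True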
2025-03-17T18:45:23.1441729Z 2025-03-17T18:45:23.1441820Z Args: 2025-03-17T18:45:23.1442028Z dim (Union[int, str]): Dimension to be unflattened 2025-03-17T18:45:23.1442414Z unflattened_size (Union[torch.Size, Tuple, List, NamedShape]): New shape of the unflattened dimension 2025-03-17T18:45:23.1442513Z 2025-03-17T18:45:23.1442611Z Examples: 2025-03-17T18:45:23.1442736Z >>> input = torch.randn(2, 50) 2025-03-17T18:45:23.1442842Z >>> # With tuple of ints 2025-03-17T18:45:23.1442961Z >>> m = nn.Sequential( 2025-03-17T18:45:23.1443067Z >>> nn.Linear(50, 50), 2025-03-17T18:45:23.1443194Z >>> nn.Unflatten(1, (2, 5, 5)) 2025-03-17T18:45:23.1443280Z >>> ) 2025-03-17T18:45:23.1443393Z >>> output = m(input) 2025-03-17T18:45:23.1443527Z >>> output.size() 2025-03-17T18:45:23.1443644Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:23.1443745Z >>> # With torch.Size 2025-03-17T18:45:23.1443863Z >>> m = nn.Sequential( 2025-03-17T18:45:23.1443966Z >>> nn.Linear(50, 50), 2025-03-17T18:45:23.1444116Z >>> nn.Unflatten(1, torch.Size([2, 5, 5])) 2025-03-17T18:45:23.1444203Z >>> ) 2025-03-17T18:45:23.1444309Z >>> output = m(input) 2025-03-17T18:45:23.1444418Z >>> output.size() 2025-03-17T18:45:23.1444523Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:23.1444658Z >>> # With namedshape (tuple of tuples) 2025-03-17T18:45:23.1444817Z >>> input = torch.randn(2, 50, names=('N', 'features')) 2025-03-17T18:45:23.1445034Z >>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5))) 2025-03-17T18:45:23.1445150Z >>> output = unflatten(input) 2025-03-17T18:45:23.1445264Z >>> output.size() 2025-03-17T18:45:23.1445371Z torch.Size([2, 2, 5, 5]) 2025-03-17T18:45:23.1445467Z 2025-03-17T18:45:23.1445727Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1445823Z 2025-03-17T18:45:23.1445928Z warnings.warn(msg) 2025-03-17T18:45:23.1446024Z 2025-03-17T18:45:23.1446222Z --- Parse Warning: 93 / 116 --- 2025-03-17T18:45:23.1447246Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=TripletMarginWithDistanceLoss in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/loss.py line=1700. 2025-03-17T18:45:23.1447541Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1447762Z Creates a criterion that measures the triplet loss given input 2025-03-17T18:45:23.1447962Z tensors :math:`a`, :math:`p`, and :math:`n` (representing anchor, 2025-03-17T18:45:23.1448190Z positive, and negative examples, respectively), and a nonnegative, 2025-03-17T18:45:23.1448450Z real-valued function ("distance function") used to compute the relationship 2025-03-17T18:45:23.1448694Z between the anchor and positive example ("positive distance") and the 2025-03-17T18:45:23.1448853Z anchor and negative example ("negative distance"). 2025-03-17T18:45:23.1448955Z 2025-03-17T18:45:23.1449171Z The unreduced loss (i.e., with :attr:`reduction` set to ``'none'``) 2025-03-17T18:45:23.1449290Z can be described as: 2025-03-17T18:45:23.1449376Z 2025-03-17T18:45:23.1449480Z .. 
math:: 2025-03-17T18:45:23.1449627Z \ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad 2025-03-17T18:45:23.1449798Z l_i = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\} 2025-03-17T18:45:23.1449886Z 2025-03-17T18:45:23.1450154Z where :math:`N` is the batch size; :math:`d` is a nonnegative, real-valued function 2025-03-17T18:45:23.1450453Z quantifying the closeness of two tensors, referred to as the :attr:`distance_function`; 2025-03-17T18:45:23.1450716Z and :math:`margin` is a nonnegative margin representing the minimum difference 2025-03-17T18:45:23.1450973Z between the positive and negative distances that is required for the loss to 2025-03-17T18:45:23.1451246Z be 0. The input tensors have :math:`N` elements each and can be of any shape 2025-03-17T18:45:23.1451408Z that the distance function can handle. 2025-03-17T18:45:23.1451509Z 2025-03-17T18:45:23.1451631Z If :attr:`reduction` is not ``'none'`` 2025-03-17T18:45:23.1451751Z (default ``'mean'``), then: 2025-03-17T18:45:23.1451838Z 2025-03-17T18:45:23.1451943Z .. math:: 2025-03-17T18:45:23.1452039Z \ell(x, y) = 2025-03-17T18:45:23.1452149Z \begin{cases} 2025-03-17T18:45:23.1452355Z \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ 2025-03-17T18:45:23.1452562Z \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} 2025-03-17T18:45:23.1452684Z \end{cases} 2025-03-17T18:45:23.1452777Z 2025-03-17T18:45:23.1453016Z See also :class:`~torch.nn.TripletMarginLoss`, which computes the triplet 2025-03-17T18:45:23.1453289Z loss for input tensors using the :math:`l_p` distance as the distance function. 2025-03-17T18:45:23.1453375Z 2025-03-17T18:45:23.1453475Z Args: 2025-03-17T18:45:23.1453756Z distance_function (Callable, optional): A nonnegative, real-valued function that 2025-03-17T18:45:23.1453962Z quantifies the closeness of two tensors. If not specified, 2025-03-17T18:45:23.1454138Z `nn.PairwiseDistance` will be used. Default: ``None`` 2025-03-17T18:45:23.1454424Z margin (float, optional): A nonnegative margin representing the minimum difference 2025-03-17T18:45:23.1454700Z between the positive and negative distances required for the loss to be 0. Larger 2025-03-17T18:45:23.1454999Z margins penalize cases where the negative examples are not distant enough from the 2025-03-17T18:45:23.1455175Z anchors, relative to the positives. Default: :math:`1`. 2025-03-17T18:45:23.1455442Z swap (bool, optional): Whether to use the distance swap described in the paper 2025-03-17T18:45:23.1455715Z `Learning shallow convolutional feature descriptors with triplet losses` by 2025-03-17T18:45:23.1455971Z V. Balntas, E. Riba et al. If True, and if the positive example is closer to the 2025-03-17T18:45:23.1456245Z negative example than the anchor is, swaps the positive example and the anchor in 2025-03-17T18:45:23.1456423Z the loss computation. Default: ``False``. 2025-03-17T18:45:23.1456709Z reduction (str, optional): Specifies the (optional) reduction to apply to the output: 2025-03-17T18:45:23.1456910Z ``'none'`` | ``'mean'`` | ``'sum'``. ``'none'``: no reduction will be applied, 2025-03-17T18:45:23.1457100Z ``'mean'``: the sum of the output will be divided by the number of 2025-03-17T18:45:23.1457354Z elements in the output, ``'sum'``: the output will be summed. 
Default: ``'mean'`` 2025-03-17T18:45:23.1457440Z 2025-03-17T18:45:23.1457540Z 2025-03-17T18:45:23.1457630Z Shape: 2025-03-17T18:45:23.1457894Z - Input: :math:`(N, *)` where :math:`*` represents any number of additional dimensions 2025-03-17T18:45:23.1458031Z as supported by the distance function. 2025-03-17T18:45:23.1458300Z - Output: A Tensor of shape :math:`(N)` if :attr:`reduction` is ``'none'``, or a scalar 2025-03-17T18:45:23.1458397Z otherwise. 2025-03-17T18:45:23.1458497Z 2025-03-17T18:45:23.1458596Z Examples:: 2025-03-17T18:45:23.1458692Z 2025-03-17T18:45:23.1458800Z >>> # Initialize embeddings 2025-03-17T18:45:23.1458939Z >>> embedding = nn.Embedding(1000, 128) 2025-03-17T18:45:23.1459073Z >>> anchor_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:23.1459209Z >>> positive_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:23.1459354Z >>> negative_ids = torch.randint(0, 1000, (1,)) 2025-03-17T18:45:23.1459471Z >>> anchor = embedding(anchor_ids) 2025-03-17T18:45:23.1459633Z >>> positive = embedding(positive_ids) 2025-03-17T18:45:23.1459778Z >>> negative = embedding(negative_ids) 2025-03-17T18:45:23.1459878Z >>> 2025-03-17T18:45:23.1459992Z >>> # Built-in Distance Function 2025-03-17T18:45:23.1460105Z >>> triplet_loss = \ 2025-03-17T18:45:23.1460394Z >>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance()) 2025-03-17T18:45:23.1460564Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:23.1460668Z >>> output.backward() 2025-03-17T18:45:23.1460764Z >>> 2025-03-17T18:45:23.1460876Z >>> # Custom Distance Function 2025-03-17T18:45:23.1461018Z >>> def l_infinity(x1, x2): 2025-03-17T18:45:23.1461180Z >>> return torch.max(torch.abs(x1 - x2), dim=1).values 2025-03-17T18:45:23.1461277Z >>> 2025-03-17T18:45:23.1461471Z >>> # xdoctest: +SKIP("FIXME: Would call backwards a second time") 2025-03-17T18:45:23.1461585Z >>> triplet_loss = ( 2025-03-17T18:45:23.1461872Z >>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5)) 2025-03-17T18:45:23.1462041Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:23.1462144Z >>> output.backward() 2025-03-17T18:45:23.1462244Z >>> 2025-03-17T18:45:23.1462368Z >>> # Custom Distance Function (Lambda) 2025-03-17T18:45:23.1462481Z >>> triplet_loss = ( 2025-03-17T18:45:23.1462617Z >>> nn.TripletMarginWithDistanceLoss( 2025-03-17T18:45:23.1462851Z >>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y))) 2025-03-17T18:45:23.1463010Z >>> output = triplet_loss(anchor, positive, negative) 2025-03-17T18:45:23.1463126Z >>> output.backward() 2025-03-17T18:45:23.1463212Z 2025-03-17T18:45:23.1463316Z Reference: 2025-03-17T18:45:23.1463626Z V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: 2025-03-17T18:45:23.1463869Z https://bmva-archive.org.uk/bmvc/2016/papers/paper119/index.html 2025-03-17T18:45:23.1463954Z 2025-03-17T18:45:23.1464230Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 17)) 2025-03-17T18:45:23.1464314Z 2025-03-17T18:45:23.1464424Z warnings.warn(msg) 2025-03-17T18:45:23.1464532Z 2025-03-17T18:45:23.1464746Z --- Parse Warning: 94 / 116 --- 2025-03-17T18:45:23.1465662Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=MaxUnpool2d in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/pooling.py line=395. 
2025-03-17T18:45:23.1465949Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1466108Z Computes a partial inverse of :class:`MaxPool2d`. 2025-03-17T18:45:23.1466211Z 2025-03-17T18:45:23.1466564Z :class:`MaxPool2d` is not fully invertible, since the non-maximal values are lost. 2025-03-17T18:45:23.1466670Z 2025-03-17T18:45:23.1466906Z :class:`MaxUnpool2d` takes in as input the output of :class:`MaxPool2d` 2025-03-17T18:45:23.1467170Z including the indices of the maximal values and computes a partial inverse 2025-03-17T18:45:23.1467322Z in which all non-maximal values are set to zero. 2025-03-17T18:45:23.1467420Z 2025-03-17T18:45:23.1467508Z Note: 2025-03-17T18:45:23.1467847Z This operation may behave nondeterministically when the input indices has repeat values. 2025-03-17T18:45:23.1468234Z See https://github.com/pytorch/pytorch/issues/80827 and :doc:`/notes/randomness` for more information. 2025-03-17T18:45:23.1468335Z 2025-03-17T18:45:23.1468566Z .. note:: :class:`MaxPool2d` can map several input sizes to the same output 2025-03-17T18:45:23.1468753Z sizes. Hence, the inversion process can get ambiguous. 2025-03-17T18:45:23.1468993Z To accommodate this, you can provide the needed output size 2025-03-17T18:45:23.1469245Z as an additional argument :attr:`output_size` in the forward call. 2025-03-17T18:45:23.1469372Z See the Inputs and Example below. 2025-03-17T18:45:23.1469471Z 2025-03-17T18:45:23.1469557Z Args: 2025-03-17T18:45:23.1469762Z kernel_size (int or tuple): Size of the max pooling window. 2025-03-17T18:45:23.1469938Z stride (int or tuple): Stride of the max pooling window. 2025-03-17T18:45:23.1470079Z It is set to :attr:`kernel_size` by default. 2025-03-17T18:45:23.1470307Z padding (int or tuple): Padding that was added to the input 2025-03-17T18:45:23.1470392Z 2025-03-17T18:45:23.1470494Z Inputs: 2025-03-17T18:45:23.1470623Z - `input`: the input Tensor to invert 2025-03-17T18:45:23.1470843Z - `indices`: the indices given out by :class:`~torch.nn.MaxPool2d` 2025-03-17T18:45:23.1471007Z - `output_size` (optional): the targeted output size 2025-03-17T18:45:23.1471106Z 2025-03-17T18:45:23.1471196Z Shape: 2025-03-17T18:45:23.1471399Z - Input: :math:`(N, C, H_{in}, W_{in})` or :math:`(C, H_{in}, W_{in})`. 2025-03-17T18:45:23.1471617Z - Output: :math:`(N, C, H_{out}, W_{out})` or :math:`(C, H_{out}, W_{out})`, where 2025-03-17T18:45:23.1471718Z 2025-03-17T18:45:23.1471811Z .. math:: 2025-03-17T18:45:23.1472092Z H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]} 2025-03-17T18:45:23.1472179Z 2025-03-17T18:45:23.1472286Z .. 
math:: 2025-03-17T18:45:23.1472545Z W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]} 2025-03-17T18:45:23.1472643Z 2025-03-17T18:45:23.1472812Z or as given by :attr:`output_size` in the call operator 2025-03-17T18:45:23.1472914Z 2025-03-17T18:45:23.1473006Z Example:: 2025-03-17T18:45:23.1473104Z 2025-03-17T18:45:23.1473272Z >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True) 2025-03-17T18:45:23.1473417Z >>> unpool = nn.MaxUnpool2d(2, stride=2) 2025-03-17T18:45:23.1473552Z >>> input = torch.tensor([[[[ 1., 2., 3., 4.], 2025-03-17T18:45:23.1473706Z [ 5., 6., 7., 8.], 2025-03-17T18:45:23.1473819Z [ 9., 10., 11., 12.], 2025-03-17T18:45:23.1473944Z [13., 14., 15., 16.]]]]) 2025-03-17T18:45:23.1474064Z >>> output, indices = pool(input) 2025-03-17T18:45:23.1474189Z >>> unpool(output, indices) 2025-03-17T18:45:23.1474297Z tensor([[[[ 0., 0., 0., 0.], 2025-03-17T18:45:23.1474415Z [ 0., 6., 0., 8.], 2025-03-17T18:45:23.1474514Z [ 0., 0., 0., 0.], 2025-03-17T18:45:23.1474633Z [ 0., 14., 0., 16.]]]]) 2025-03-17T18:45:23.1474853Z >>> # Now using output_size to resolve an ambiguous size for the inverse 2025-03-17T18:45:23.1475008Z >>> input = torch.tensor([[[[ 1., 2., 3., 4., 5.], 2025-03-17T18:45:23.1475121Z [ 6., 7., 8., 9., 10.], 2025-03-17T18:45:23.1475245Z [11., 12., 13., 14., 15.], 2025-03-17T18:45:23.1475361Z [16., 17., 18., 19., 20.]]]]) 2025-03-17T18:45:23.1475493Z >>> output, indices = pool(input) 2025-03-17T18:45:23.1475665Z >>> # This call will not work without specifying output_size 2025-03-17T18:45:23.1475842Z >>> unpool(output, indices, output_size=input.size()) 2025-03-17T18:45:23.1475953Z tensor([[[[ 0., 0., 0., 0., 0.], 2025-03-17T18:45:23.1476071Z [ 0., 7., 0., 9., 0.], 2025-03-17T18:45:23.1476201Z [ 0., 0., 0., 0., 0.], 2025-03-17T18:45:23.1476342Z [ 0., 17., 0., 19., 0.]]]]) 2025-03-17T18:45:23.1476428Z 2025-03-17T18:45:23.1476523Z 2025-03-17T18:45:23.1476609Z 2025-03-17T18:45:23.1476882Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1476965Z 2025-03-17T18:45:23.1477072Z warnings.warn(msg) 2025-03-17T18:45:23.1477166Z 2025-03-17T18:45:23.1477371Z --- Parse Warning: 95 / 116 --- 2025-03-17T18:45:23.1478292Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=EmbeddingBag in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/sparse.py line=270. 2025-03-17T18:45:23.1478600Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1478925Z Compute sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings. 2025-03-17T18:45:23.1479025Z 2025-03-17T18:45:23.1479353Z For bags of constant length, no :attr:`per_sample_weights`, no indices equal to :attr:`padding_idx`, 2025-03-17T18:45:23.1479480Z and with 2D inputs, this class 2025-03-17T18:45:23.1479564Z 2025-03-17T18:45:23.1479881Z * with ``mode="sum"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=1)``, 2025-03-17T18:45:23.1480215Z * with ``mode="mean"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=1)``, 2025-03-17T18:45:23.1480531Z * with ``mode="max"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=1)``. 
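[editor's note] The EmbeddingBag warning above states the Embedding-plus-reduction equivalences as bullets only; a short sketch (not part of the captured docstring) checking the mode="sum" case for fixed-length bags:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bag = nn.EmbeddingBag(10, 3, mode="sum")
    emb = nn.Embedding(10, 3)
    emb.weight.data.copy_(bag.weight.data)        # share one embedding table

    indices = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])   # 2 bags of constant length 4

    via_bag = bag(indices)                        # 2D input: each row is one bag
    via_embedding = emb(indices).sum(dim=1)       # Embedding followed by torch.sum(dim=1)
    print(torch.allclose(via_bag, via_embedding, atol=1e-6))   # True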
2025-03-17T18:45:23.1480618Z 2025-03-17T18:45:23.1480994Z However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these 2025-03-17T18:45:23.1481090Z operations. 2025-03-17T18:45:23.1481176Z 2025-03-17T18:45:23.1481453Z EmbeddingBag also supports per-sample weights as an argument to the forward 2025-03-17T18:45:23.1481700Z pass. This scales the output of the Embedding before performing a weighted 2025-03-17T18:45:23.1481968Z reduction as specified by ``mode``. If :attr:`per_sample_weights` is passed, the 2025-03-17T18:45:23.1482246Z only supported ``mode`` is ``"sum"``, which computes a weighted sum according to 2025-03-17T18:45:23.1482363Z :attr:`per_sample_weights`. 2025-03-17T18:45:23.1482450Z 2025-03-17T18:45:23.1482548Z Args: 2025-03-17T18:45:23.1482736Z num_embeddings (int): size of the dictionary of embeddings 2025-03-17T18:45:23.1482921Z embedding_dim (int): the size of each embedding vector 2025-03-17T18:45:23.1483238Z max_norm (float, optional): If given, each embedding vector with norm larger than :attr:`max_norm` 2025-03-17T18:45:23.1483401Z is renormalized to have norm :attr:`max_norm`. 2025-03-17T18:45:23.1483750Z norm_type (float, optional): The p of the p-norm to compute for the :attr:`max_norm` option. Default ``2``. 2025-03-17T18:45:23.1484096Z scale_grad_by_freq (bool, optional): if given, this will scale gradients by the inverse of frequency of 2025-03-17T18:45:23.1484273Z the words in the mini-batch. Default ``False``. 2025-03-17T18:45:23.1484454Z Note: this option is not supported when ``mode="max"``. 2025-03-17T18:45:23.1484722Z mode (str, optional): ``"sum"``, ``"mean"`` or ``"max"``. Specifies the way to reduce the bag. 2025-03-17T18:45:23.1484949Z ``"sum"`` computes the weighted sum, taking :attr:`per_sample_weights` 2025-03-17T18:45:23.1485193Z into consideration. ``"mean"`` computes the average of the values 2025-03-17T18:45:23.1485368Z in the bag, ``"max"`` computes the max value over each bag. 2025-03-17T18:45:23.1485550Z Default: ``"mean"`` 2025-03-17T18:45:23.1485876Z sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See 2025-03-17T18:45:23.1486150Z Notes for more details regarding sparse gradients. Note: this option is not 2025-03-17T18:45:23.1486280Z supported when ``mode="max"``. 2025-03-17T18:45:23.1486672Z include_last_offset (bool, optional): if ``True``, :attr:`offsets` has one additional element, where the last element 2025-03-17T18:45:23.1486921Z is equivalent to the size of `indices`. This matches the CSR format. 2025-03-17T18:45:23.1487271Z padding_idx (int, optional): If specified, the entries at :attr:`padding_idx` do not contribute to the 2025-03-17T18:45:23.1487552Z gradient; therefore, the embedding vector at :attr:`padding_idx` is not updated 2025-03-17T18:45:23.1487812Z during training, i.e. it remains as a fixed "pad". For a newly constructed 2025-03-17T18:45:23.1488085Z EmbeddingBag, the embedding vector at :attr:`padding_idx` will default to all 2025-03-17T18:45:23.1488342Z zeros, but can be updated to another value to be used as the padding vector. 2025-03-17T18:45:23.1488590Z Note that the embedding vector at :attr:`padding_idx` is excluded from the 2025-03-17T18:45:23.1488714Z reduction. 
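[editor's note] The Args block above notes that per_sample_weights is only supported with mode="sum" and scales each embedding before the reduction; a minimal illustration of that weighted sum (my own sketch, not from the docstring):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    bag = nn.EmbeddingBag(10, 3, mode="sum")

    input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
    offsets = torch.tensor([0, 4])                 # two bags: input[0:4] and input[4:8]
    weights = torch.rand(8)                        # one weight per looked-up index

    out = bag(input, offsets, per_sample_weights=weights)

    # Manual weighted sum for the first bag only.
    manual = (bag.weight[input[:4]] * weights[:4].unsqueeze(1)).sum(dim=0)
    print(torch.allclose(out[0], manual, atol=1e-6))   # True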
2025-03-17T18:45:23.1488796Z 2025-03-17T18:45:23.1488898Z Attributes: 2025-03-17T18:45:23.1489218Z weight (Tensor): the learnable weights of the module of shape `(num_embeddings, embedding_dim)` 2025-03-17T18:45:23.1489375Z initialized from :math:`\mathcal{N}(0, 1)`. 2025-03-17T18:45:23.1489458Z 2025-03-17T18:45:23.1489564Z Examples:: 2025-03-17T18:45:23.1489646Z 2025-03-17T18:45:23.1489828Z >>> # an EmbeddingBag module containing 10 tensors of size 3 2025-03-17T18:45:23.1489991Z >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum') 2025-03-17T18:45:23.1490159Z >>> # a batch of 2 samples of 4 indices each 2025-03-17T18:45:23.1490352Z >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long) 2025-03-17T18:45:23.1490515Z >>> offsets = torch.tensor([0, 4], dtype=torch.long) 2025-03-17T18:45:23.1490661Z >>> # xdoctest: +IGNORE_WANT("non-deterministic") 2025-03-17T18:45:23.1490788Z >>> embedding_sum(input, offsets) 2025-03-17T18:45:23.1490903Z tensor([[-0.8861, -5.4350, -0.0523], 2025-03-17T18:45:23.1491019Z [ 1.1306, -2.5798, -1.0044]]) 2025-03-17T18:45:23.1491107Z 2025-03-17T18:45:23.1491230Z >>> # Example with padding_idx 2025-03-17T18:45:23.1491441Z >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2) 2025-03-17T18:45:23.1491641Z >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long) 2025-03-17T18:45:23.1491796Z >>> offsets = torch.tensor([0, 4], dtype=torch.long) 2025-03-17T18:45:23.1491921Z >>> embedding_sum(input, offsets) 2025-03-17T18:45:23.1492030Z tensor([[ 0.0000, 0.0000, 0.0000], 2025-03-17T18:45:23.1492143Z [-0.7082, 3.2145, -2.6251]]) 2025-03-17T18:45:23.1492234Z 2025-03-17T18:45:23.1492421Z >>> # An EmbeddingBag can be loaded from an Embedding like so 2025-03-17T18:45:23.1492571Z >>> embedding = nn.Embedding(10, 3, padding_idx=2) 2025-03-17T18:45:23.1492740Z >>> embedding_sum = nn.EmbeddingBag.from_pretrained( 2025-03-17T18:45:23.1492848Z embedding.weight, 2025-03-17T18:45:23.1493016Z padding_idx=embedding.padding_idx, 2025-03-17T18:45:23.1493141Z mode='sum') 2025-03-17T18:45:23.1493237Z 2025-03-17T18:45:23.1493495Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1493588Z 2025-03-17T18:45:23.1493692Z warnings.warn(msg) 2025-03-17T18:45:23.1493777Z 2025-03-17T18:45:23.1493987Z --- Parse Warning: 96 / 116 --- 2025-03-17T18:45:23.1495019Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DistributedDataParallel.join in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py line=1742. 2025-03-17T18:45:23.1495328Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1495425Z 2025-03-17T18:45:23.1495668Z Context manager for training with uneven inputs across processes in DDP. 2025-03-17T18:45:23.1495756Z 2025-03-17T18:45:23.1496003Z This context manager will keep track of already-joined DDP processes, 2025-03-17T18:45:23.1496213Z and "shadow" the forward and backward passes by inserting collective 2025-03-17T18:45:23.1496461Z communication operations to match with the ones created by non-joined 2025-03-17T18:45:23.1496696Z DDP processes. This will ensure each collective call has a corresponding 2025-03-17T18:45:23.1496935Z call by already-joined DDP processes, preventing hangs or errors that 2025-03-17T18:45:23.1497140Z would otherwise happen when training with uneven inputs across 2025-03-17T18:45:23.1497392Z processes. 
Alternatively, if the flag ``throw_on_early_termination`` is 2025-03-17T18:45:23.1497605Z specified to be ``True``, all trainers will throw an error once one rank 2025-03-17T18:45:23.1497818Z runs out of inputs, allowing these errors to be caught and handled 2025-03-17T18:45:23.1497934Z according to application logic. 2025-03-17T18:45:23.1498032Z 2025-03-17T18:45:23.1498256Z Once all DDP processes have joined, the context manager will broadcast 2025-03-17T18:45:23.1498497Z the model corresponding to the last joined process to all processes to 2025-03-17T18:45:23.1498652Z ensure the model is the same across all processes 2025-03-17T18:45:23.1498771Z (which is guaranteed by DDP). 2025-03-17T18:45:23.1498895Z 2025-03-17T18:45:23.1499112Z To use this to enable training with uneven inputs across processes, 2025-03-17T18:45:23.1499338Z simply wrap this context manager around your training loop. No further 2025-03-17T18:45:23.1499528Z modifications to the model or data loading is required. 2025-03-17T18:45:23.1499616Z 2025-03-17T18:45:23.1499718Z .. warning:: 2025-03-17T18:45:23.1499928Z If the model or training loop this context manager is wrapped around 2025-03-17T18:45:23.1500129Z has additional distributed collective operations, such as 2025-03-17T18:45:23.1500322Z ``SyncBatchNorm`` in the model's forward pass, then the flag 2025-03-17T18:45:23.1500546Z ``throw_on_early_termination`` must be enabled. This is because this 2025-03-17T18:45:23.1500762Z context manager is not aware of non-DDP collective communication. 2025-03-17T18:45:23.1500950Z This flag will cause all ranks to throw when any one rank 2025-03-17T18:45:23.1501167Z exhausts inputs, allowing these errors to be caught and recovered 2025-03-17T18:45:23.1501284Z from across all ranks. 2025-03-17T18:45:23.1501369Z 2025-03-17T18:45:23.1501467Z Args: 2025-03-17T18:45:23.1501653Z divide_by_initial_world_size (bool): If ``True``, will divide 2025-03-17T18:45:23.1501874Z gradients by the initial ``world_size`` DDP training was launched 2025-03-17T18:45:23.1502043Z with. If ``False``, will compute the effective world size 2025-03-17T18:45:23.1502247Z (number of ranks that have not depleted their inputs yet) and 2025-03-17T18:45:23.1502460Z divide gradients by that during allreduce. Set 2025-03-17T18:45:23.1503031Z ``divide_by_initial_world_size=True`` to ensure every input 2025-03-17T18:45:23.1503250Z sample including the uneven inputs have equal weight in terms of 2025-03-17T18:45:23.1503435Z how much they contribute to the global gradient. This is 2025-03-17T18:45:23.1503613Z achieved by always dividing the gradient by the initial 2025-03-17T18:45:23.1503817Z ``world_size`` even when we encounter uneven inputs. If you set 2025-03-17T18:45:23.1503991Z this to ``False``, we divide the gradient by the remaining 2025-03-17T18:45:23.1504225Z number of nodes. This ensures parity with training on a smaller 2025-03-17T18:45:23.1504414Z ``world_size`` although it also means the uneven inputs would 2025-03-17T18:45:23.1504618Z contribute more towards the global gradient. Typically, you 2025-03-17T18:45:23.1504813Z would want to set this to ``True`` for cases where the last few 2025-03-17T18:45:23.1505027Z inputs of your training job are uneven. In extreme cases, where 2025-03-17T18:45:23.1505217Z there is a large discrepancy in the number of inputs, setting 2025-03-17T18:45:23.1505373Z this to ``False`` might provide better results. 2025-03-17T18:45:23.1505590Z enable (bool): Whether to enable uneven input detection or not. 
Pass 2025-03-17T18:45:23.1505775Z in ``enable=False`` to disable in cases where you know that 2025-03-17T18:45:23.1505969Z inputs are even across participating processes. Default is 2025-03-17T18:45:23.1506073Z ``True``. 2025-03-17T18:45:23.1506260Z throw_on_early_termination (bool): Whether to throw an error 2025-03-17T18:45:23.1506538Z or continue training when at least one rank has exhausted 2025-03-17T18:45:23.1506732Z inputs. If ``True``, will throw upon the first rank reaching end 2025-03-17T18:45:23.1506917Z of data. If ``False``, will continue training with a smaller 2025-03-17T18:45:23.1507120Z effective world size until all ranks are joined. Note that if 2025-03-17T18:45:23.1507254Z this flag is specified, then the flag 2025-03-17T18:45:23.1507432Z ``divide_by_initial_world_size`` would be ignored. Default 2025-03-17T18:45:23.1507572Z is ``False``. 2025-03-17T18:45:23.1507660Z 2025-03-17T18:45:23.1511912Z 2025-03-17T18:45:23.1512038Z Example:: 2025-03-17T18:45:23.1512137Z 2025-03-17T18:45:23.1512265Z >>> # xdoctest: +SKIP("Distributed") 2025-03-17T18:45:23.1512384Z >>> import torch 2025-03-17T18:45:23.1512507Z >>> import torch.distributed as dist 2025-03-17T18:45:23.1512610Z >>> import os 2025-03-17T18:45:23.1512741Z >>> import torch.multiprocessing as mp 2025-03-17T18:45:23.1512851Z >>> import torch.nn as nn 2025-03-17T18:45:23.1512974Z >>> # On each spawned worker 2025-03-17T18:45:23.1513082Z >>> def worker(rank): 2025-03-17T18:45:23.1513283Z >>> dist.init_process_group("nccl", rank=rank, world_size=2) 2025-03-17T18:45:23.1513404Z >>> torch.cuda.set_device(rank) 2025-03-17T18:45:23.1513553Z >>> model = nn.Linear(1, 1, bias=False).to(rank) 2025-03-17T18:45:23.1513739Z >>> model = torch.nn.parallel.DistributedDataParallel( 2025-03-17T18:45:23.1513894Z >>> model, device_ids=[rank], output_device=rank 2025-03-17T18:45:23.1513985Z >>> ) 2025-03-17T18:45:23.1514126Z >>> # Rank 1 gets one more input than rank 0. 2025-03-17T18:45:23.1514325Z >>> inputs = [torch.tensor([1]).float() for _ in range(10 + rank)] 2025-03-17T18:45:23.1514435Z >>> with model.join(): 2025-03-17T18:45:23.1514532Z >>> for _ in range(5): 2025-03-17T18:45:23.1514654Z >>> for inp in inputs: 2025-03-17T18:45:23.1514842Z >>> loss = model(inp).sum() 2025-03-17T18:45:23.1514994Z >>> loss.backward() 2025-03-17T18:45:23.1515196Z >>> # Without the join() API, the below synchronization will hang 2025-03-17T18:45:23.1515352Z >>> # blocking for rank 1's allreduce to complete. 2025-03-17T18:45:23.1515487Z >>> torch.cuda.synchronize(device=rank) 2025-03-17T18:45:23.1515581Z 2025-03-17T18:45:23.1515846Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1515941Z 2025-03-17T18:45:23.1516042Z warnings.warn(msg) 2025-03-17T18:45:23.1516135Z 2025-03-17T18:45:23.1516407Z --- Parse Warning: 97 / 116 --- 2025-03-17T18:45:23.1517528Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DistributedDataParallel._register_fused_optim in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py line=2033. 2025-03-17T18:45:23.1517802Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1517897Z 2025-03-17T18:45:23.1518210Z Register an optimizer in DDP to optimize parameter immediately after its gradient reduction. 
--- Parse Warning: 97 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DistributedDataParallel._register_fused_optim in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/parallel/distributed.py line=2033.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Register an optimizer in DDP to optimize a parameter immediately after its gradient reduction.

Registers an optimizer with DDP such that the optimization for a parameter will run immediately when that parameter's gradient is finished with reduction, instead of waiting for all parameters' gradients to finish reduction. This can result in a training speedup depending on your workload, since the optimizer can run while gradient reduction for other parameters is still ongoing. In addition, this has the potential to reduce peak memory consumption during training, as it only needs to load the per-parameter optimizer states of a single parameter at a time, instead of loading all per-parameter optimizer states at once.

Args:
    optim (Type): a ``torch.optim.Optimizer`` class to be registered as a fused optimizer.
    *args (Sequence[Any]): Arguments to forward to `optim`.
    optim_params (Optional[Iterable[torch.Tensor]]): Set of parameters to optimize, similar to the `params` argument of traditional `torch.optim` Optimizers. If this is omitted, all DDP model parameters will be optimized.
    **kwargs (Dict[str, Any]): Keyword arguments to forward to `optim`.

.. warning ::
    _register_fused_optim should only be called once on a DDP instance, and registering multiple fused optimizers for the same DDP model is not currently supported. Please ping https://github.com/pytorch/pytorch/issues/71595 if this is necessary for your use case.

.. warning ::
    _register_fused_optim and register_comm_hook currently do not compose together, meaning that custom DDP communication hooks are not supported with overlapped optimizers. Please ping https://github.com/pytorch/pytorch/issues/71595 if this is necessary for your use case.

.. warning ::
    Gradient accumulation and DDP `no_sync` are currently not supported with overlapped optimizers. Please ping https://github.com/pytorch/pytorch/issues/71595 if this is necessary for your use case.

Example::

    >>> # xdoctest: +SKIP("No rendezvous handler")
    >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
    >>> net = torch.nn.parallel.DistributedDataParallel(model, pg)
    >>> lr = 1e-2
    >>> betas = (0.9, 0.99)
    >>> eps = 1e-6
    >>> net._register_fused_optim(torch.optim.Adam, lr, betas=betas, eps=eps)
    >>> # Example with subset of parameters
    >>> params_to_opt = [list(net.parameters())[0]]
    >>> net._register_fused_optim(
    ...     torch.optim.Adam, lr, optim_params=params_to_opt, betas=betas, eps=eps
    ... )

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
--- Parse Warning: 98 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=convert_conv2d_weight_memory_format in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/memory_format.py line=6.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Convert ``memory_format`` of ``nn.Conv2d.weight`` to the specified ``memory_format``.

The conversion recursively applies to nested ``nn.Module``, including ``module``. Note that it only changes the memory_format, not the semantics of each dimension. This function is used to facilitate the computation to adopt NHWC kernels, which provide a considerable speed-up for fp16 data on CUDA devices with compute capability >= 7.0.

.. note::
    Calling ``model.to(memory_format=torch.channels_last)`` is more aggressive than the utility function ``convert_conv2d_weight_memory_format``. Any layer with a 4d weight will be affected by ``model.to``, and does not necessarily benefit from conversion to the specified ``memory_format``. One case we are confident about is the NHWC (channels_last) conversion for convolution in cuDNN, as it is beneficial to run convolution in NHWC even when a permutation has to be applied to the input tensors.

    Hence our strategy here is to convert only the weight of convolution to channels_last. This ensures that:
    1. Fast convolution kernels will be used, the benefit of which could outweigh the overhead of permutation (if the input is not in the same format).
    2. No unnecessary permutations are applied on layers that do not benefit from memory_format conversion.

    The optimal case is that layers between convolution layers are channels-last compatible. The input tensor is permuted to channels last when it encounters the first convolution layer and stays in that memory format, so subsequent convolutions do not need to permute their input tensors.

    If a layer between convolution layers is incompatible with channels last, we need to permute the input tensor back to contiguous format for that layer. The input tensor will go through the remaining layers in contiguous format and be permuted to channels last when it encounters another convolution layer. There's no point in propagating that permutation to an earlier layer, as most layers are quite agnostic to ``memory_format``.

    This claim might change when PyTorch supports fusion of permutation, as there might be a better spot to fuse the permutation than immediately before a convolution.

Args:
    module (nn.Module): ``nn.Conv2d`` & ``nn.ConvTranspose2d`` or container ``nn.Module``
    memory_format: user specified ``memory_format``, e.g. ``torch.channels_last`` or ``torch.contiguous_format``

Returns:
    The original module with updated ``nn.Conv2d``

Example:
    >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA)
    >>> # xdoctest: +REQUIRES(env:CUBLAS_WORKSPACE_CONFIG)
    >>> input = torch.randint(1, 10, (2, 8, 4, 4), dtype=torch.float16, device="cuda")
    >>> model = nn.Sequential(
    ...     nn.Conv2d(8, 4, 3)).cuda().half()
    >>> # This is identical to:
    >>> # nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last)
    >>> model = nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last)
    >>> out = model(input)

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
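A hedged sketch, runnable on CPU, of what the conversion changes for a single ``nn.Conv2d`` (the layer sizes are arbitrary): only the weight's memory format flips to channels_last, while its shape and values stay the same::

    import torch
    from torch import nn

    model = nn.Sequential(nn.Conv2d(8, 4, 3))
    print(model[0].weight.is_contiguous(memory_format=torch.channels_last))   # False

    model = nn.utils.convert_conv2d_weight_memory_format(model, torch.channels_last)
    print(model[0].weight.is_contiguous(memory_format=torch.channels_last))   # True
    print(model[0].weight.shape)  # unchanged: torch.Size([4, 8, 3, 3])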
--- Parse Warning: 99 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=convert_conv3d_weight_memory_format in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/memory_format.py line=81.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Convert ``memory_format`` of ``nn.Conv3d.weight`` to the specified ``memory_format``.

The conversion recursively applies to nested ``nn.Module``, including ``module``. Note that it only changes the memory_format, not the semantics of each dimension. This function is used to facilitate the computation to adopt NDHWC kernels, which provide a considerable speed-up for fp16 data on CUDA devices with compute capability >= 7.0.

.. note::
    Calling ``model.to(memory_format=torch.channels_last_3d)`` is more aggressive than the utility function ``convert_conv3d_weight_memory_format``. Any layer with a 5d weight will be affected by ``model.to``, and does not necessarily benefit from conversion to the specified ``memory_format``. One case we are confident about is the NDHWC (channels_last_3d) conversion for convolution in cuDNN, as it is beneficial to run convolution in NDHWC even when a permutation has to be applied to the input tensors.

    Hence our strategy here is to convert only the weight of convolution to channels_last_3d. This ensures that:
    1. Fast convolution kernels will be used, the benefit of which could outweigh the overhead of permutation (if the input is not in the same format).
    2. No unnecessary permutations are applied on layers that do not benefit from memory_format conversion.

    The optimal case is that layers between convolution layers are channels-last compatible. The input tensor is permuted to channels last when it encounters the first convolution layer and stays in that memory format, so subsequent convolutions do not need to permute their input tensors.

    If a layer between convolution layers is incompatible with channels last, we need to permute the input tensor back to contiguous format for that layer. The input tensor will go through the remaining layers in contiguous format and be permuted to channels last when it encounters another convolution layer. There's no point in propagating that permutation to an earlier layer, as most layers are quite agnostic to ``memory_format``.

    This claim might change when PyTorch supports fusion of permutation, as there might be a better spot to fuse the permutation than immediately before a convolution.

Args:
    module (nn.Module): ``nn.Conv3d`` & ``nn.ConvTranspose3d`` or container ``nn.Module``
    memory_format: user specified ``memory_format``, e.g. ``torch.channels_last`` or ``torch.contiguous_format``

Returns:
    The original module with updated ``nn.Conv3d``

Example:
    >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA)
    >>> # xdoctest: +REQUIRES(env:CUBLAS_WORKSPACE_CONFIG)
    >>> input = torch.randint(1, 10, (2, 8, 4, 4, 4), dtype=torch.float16, device="cuda")
    >>> model = nn.Sequential(
    ...     nn.Conv3d(8, 4, 3)).cuda().half()
    >>> # This is identical to:
    >>> # nn.utils.convert_conv3d_weight_memory_format(model, torch.channels_last_3d)
    >>> model = nn.utils.convert_conv3d_weight_memory_format(model, torch.channels_last_3d)
    >>> out = model(input)

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
--- Parse Warning: 100 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=random_structured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=935.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Prune tensor by removing random channels along the specified dimension.

Prunes the tensor corresponding to the parameter called ``name`` in ``module`` by removing the specified ``amount`` of (currently unpruned) channels along the specified ``dim``, selected at random. Modifies the module in place (and also returns the modified module) by:

1) adding a named buffer called ``name+'_mask'`` corresponding to the binary mask applied to the parameter ``name`` by the pruning method.
2) replacing the parameter ``name`` by its pruned version, while the original (unpruned) parameter is stored in a new parameter named ``name+'_orig'``.

Args:
    module (nn.Module): module containing the tensor to prune
    name (str): parameter name within ``module`` on which pruning will act.
    amount (int or float): quantity of parameters to prune. If ``float``, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If ``int``, it represents the absolute number of parameters to prune.
    dim (int): index of the dim along which we define channels to prune.

Returns:
    module (nn.Module): modified (i.e. pruned) version of the input module

Examples:
    >>> # xdoctest: +SKIP
    >>> m = prune.random_structured(
    ...     nn.Linear(5, 3), 'weight', amount=3, dim=1
    ... )
    >>> columns_pruned = int(sum(torch.sum(m.weight, dim=0) == 0))
    >>> print(columns_pruned)
    3

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
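The ``name+'_mask'`` / ``name+'_orig'`` reparametrization described above can be inspected directly; a hedged sketch (layer sizes arbitrary) showing the buffers it creates and how ``prune.remove`` folds them back into a plain parameter::

    import torch
    from torch import nn
    from torch.nn.utils import prune

    m = nn.Linear(5, 3)
    prune.random_structured(m, name="weight", amount=2, dim=1)

    # After pruning, "weight" is recomputed as weight_orig * weight_mask.
    print(sorted(name for name, _ in m.named_parameters()))  # ['bias', 'weight_orig']
    print(sorted(name for name, _ in m.named_buffers()))     # ['weight_mask']
    assert torch.equal(m.weight, m.weight_orig * m.weight_mask)

    # Make the pruning permanent: drops the mask/orig pair, keeps the zeroed weight.
    prune.remove(m, "weight")
    print(sorted(name for name, _ in m.named_parameters()))  # ['bias', 'weight']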
--- Parse Warning: 101 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ln_structured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=976.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Prune tensor by removing channels with the lowest L\ ``n``-norm along the specified dimension.

Prunes the tensor corresponding to the parameter called ``name`` in ``module`` by removing the specified ``amount`` of (currently unpruned) channels along the specified ``dim`` with the lowest L\ ``n``-norm. Modifies the module in place (and also returns the modified module) by:

1) adding a named buffer called ``name+'_mask'`` corresponding to the binary mask applied to the parameter ``name`` by the pruning method.
2) replacing the parameter ``name`` by its pruned version, while the original (unpruned) parameter is stored in a new parameter named ``name+'_orig'``.

Args:
    module (nn.Module): module containing the tensor to prune
    name (str): parameter name within ``module`` on which pruning will act.
    amount (int or float): quantity of parameters to prune. If ``float``, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If ``int``, it represents the absolute number of parameters to prune.
    n (int, float, inf, -inf, 'fro', 'nuc'): See documentation of valid entries for argument ``p`` in :func:`torch.norm`.
    dim (int): index of the dim along which we define channels to prune.
    importance_scores (torch.Tensor): tensor of importance scores (of same shape as the module parameter) used to compute the mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the module parameter will be used in its place.

Returns:
    module (nn.Module): modified (i.e. pruned) version of the input module

Examples:
    >>> from torch.nn.utils import prune
    >>> m = prune.ln_structured(
    ...     nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf')
    ... )

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
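A hedged sketch of what "lowest Ln-norm along ``dim``" means in practice (the layer size and the choice ``n=2``, ``dim=0`` are arbitrary): the masked-out channel is the one whose norm over all other dimensions is smallest::

    import torch
    from torch import nn
    from torch.nn.utils import prune

    m = nn.Linear(5, 3)
    # Norm of each output channel (rows of the weight), i.e. over every dim except dim=0.
    channel_norms = m.weight.detach().norm(p=2, dim=1)

    prune.ln_structured(m, name="weight", amount=1, n=2, dim=0)
    zeroed_rows = (m.weight_mask.sum(dim=1) == 0).nonzero().flatten()

    # Exactly one channel is pruned, and it is the one with the smallest L2 norm.
    assert zeroed_rows.tolist() == [int(channel_norms.argmin())]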
--- Parse Warning: 102 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=global_unstructured in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=1023.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Globally prunes tensors corresponding to all parameters in ``parameters`` by applying the specified ``pruning_method``.

Modifies modules in place by:

1) adding a named buffer called ``name+'_mask'`` corresponding to the binary mask applied to the parameter ``name`` by the pruning method.
2) replacing the parameter ``name`` by its pruned version, while the original (unpruned) parameter is stored in a new parameter named ``name+'_orig'``.

Args:
    parameters (Iterable of (module, name) tuples): parameters of the model to prune in a global fashion, i.e. by aggregating all weights prior to deciding which ones to prune. module must be of type :class:`nn.Module`, and name must be a string.
    pruning_method (function): a valid pruning function from this module, or a custom one implemented by the user that satisfies the implementation guidelines and has ``PRUNING_TYPE='unstructured'``.
    importance_scores (dict): a dictionary mapping (module, name) tuples to the corresponding parameter's importance scores tensor. The tensor should be the same shape as the parameter, and is used for computing mask for pruning. If unspecified or None, the parameter will be used in place of its importance scores.
    kwargs: other keyword arguments such as:
        amount (int or float): quantity of parameters to prune across the specified parameters. If ``float``, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If ``int``, it represents the absolute number of parameters to prune.

Raises:
    TypeError: if ``PRUNING_TYPE != 'unstructured'``

Note:
    Since global structured pruning doesn't make much sense unless the norm is normalized by the size of the parameter, we now limit the scope of global pruning to unstructured methods.

Examples:
    >>> from torch.nn.utils import prune
    >>> from collections import OrderedDict
    >>> net = nn.Sequential(OrderedDict([
    ...     ('first', nn.Linear(10, 4)),
    ...     ('second', nn.Linear(4, 1)),
    ... ]))
    >>> parameters_to_prune = (
    ...     (net.first, 'weight'),
    ...     (net.second, 'weight'),
    ... )
    >>> prune.global_unstructured(
    ...     parameters_to_prune,
    ...     pruning_method=prune.L1Unstructured,
    ...     amount=10,
    ... )
    >>> print(sum(torch.nn.utils.parameters_to_vector(net.buffers()) == 0))
    tensor(10)

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
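Because the pruned entries in the docstring example are chosen across both layers at once, per-layer sparsity usually ends up uneven; a hedged sketch on the same toy network that makes this visible::

    import torch
    from torch import nn
    from torch.nn.utils import prune
    from collections import OrderedDict

    net = nn.Sequential(OrderedDict([
        ('first', nn.Linear(10, 4)),
        ('second', nn.Linear(4, 1)),
    ]))
    prune.global_unstructured(
        [(net.first, 'weight'), (net.second, 'weight')],
        pruning_method=prune.L1Unstructured,
        amount=10,
    )

    for name, module in [('first', net.first), ('second', net.second)]:
        mask = module.weight_mask
        sparsity = float((mask == 0).sum()) / mask.numel()
        # The two layers usually receive different shares of the 10 pruned weights.
        print(f"{name}: {sparsity:.0%} of weights pruned")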
--- Parse Warning: 103 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=custom_from_mask in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/prune.py line=1142.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Prune the tensor corresponding to the parameter called ``name`` in ``module`` by applying the pre-computed mask in ``mask``.

Modifies the module in place (and also returns the modified module) by:

1) adding a named buffer called ``name+'_mask'`` corresponding to the binary mask applied to the parameter ``name`` by the pruning method.
2) replacing the parameter ``name`` by its pruned version, while the original (unpruned) parameter is stored in a new parameter named ``name+'_orig'``.

Args:
    module (nn.Module): module containing the tensor to prune
    name (str): parameter name within ``module`` on which pruning will act.
    mask (Tensor): binary mask to be applied to the parameter.

Returns:
    module (nn.Module): modified (i.e. pruned) version of the input module

Examples:
    >>> from torch.nn.utils import prune
    >>> m = prune.custom_from_mask(
    ...     nn.Linear(5, 3), name='bias', mask=torch.tensor([0, 1, 0])
    ... )
    >>> print(m.bias_mask)
    tensor([0., 1., 0.])

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
--- Parse Warning: 104 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=AveragedModel in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py line=117.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Implements averaged model for Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA).

Stochastic Weight Averaging was proposed in `Averaging Weights Leads to Wider Optima and Better Generalization`_ by Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov and Andrew Gordon Wilson (UAI 2018).

Exponential Moving Average is a variation of `Polyak averaging`_, but using exponential weights instead of equal weights across iterations.

The AveragedModel class creates a copy of the provided module :attr:`model` on the device :attr:`device` and allows computing running averages of the parameters of the :attr:`model`.

Args:
    model (torch.nn.Module): model to use with SWA/EMA
    device (torch.device, optional): if provided, the averaged model will be stored on the :attr:`device`
    avg_fn (function, optional): the averaging function used to update parameters; the function must take in the current value of the :class:`AveragedModel` parameter, the current value of the :attr:`model` parameter, and the number of models already averaged; if None, an equally weighted average is used (default: None)
    multi_avg_fn (function, optional): the averaging function used to update parameters inplace; the function must take in the current values of the :class:`AveragedModel` parameters as a list, the current values of the :attr:`model` parameters as a list, and the number of models already averaged; if None, an equally weighted average is used (default: None)
    use_buffers (bool): if ``True``, it will compute running averages for both the parameters and the buffers of the model. (default: ``False``)

Example:
    >>> # xdoctest: +SKIP("undefined variables")
    >>> loader, optimizer, model, loss_fn = ...
    >>> swa_model = torch.optim.swa_utils.AveragedModel(model)
    >>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
    ...     T_max=300)
    >>> swa_start = 160
    >>> swa_scheduler = SWALR(optimizer, swa_lr=0.05)
    >>> for i in range(300):
    >>>     for input, target in loader:
    >>>         optimizer.zero_grad()
    >>>         loss_fn(model(input), target).backward()
    >>>         optimizer.step()
    >>>     if i > swa_start:
    >>>         swa_model.update_parameters(model)
    >>>         swa_scheduler.step()
    >>>     else:
    >>>         scheduler.step()
    >>>
    >>> # Update bn statistics for the swa_model at the end
    >>> torch.optim.swa_utils.update_bn(loader, swa_model)

You can also use custom averaging functions with the `avg_fn` or `multi_avg_fn` parameters. If no averaging function is provided, the default is to compute an equally-weighted average of the weights (SWA).

Example:
    >>> # xdoctest: +SKIP("undefined variables")
    >>> # Compute exponential moving averages of the weights and buffers
    >>> ema_model = torch.optim.swa_utils.AveragedModel(model,
    ...     multi_avg_fn=torch.optim.swa_utils.get_ema_multi_avg_fn(0.9), use_buffers=True)

.. note::
    When using SWA/EMA with models containing Batch Normalization you may need to update the activation statistics for Batch Normalization. This can be done either by using :meth:`torch.optim.swa_utils.update_bn` or by setting :attr:`use_buffers` to `True`. The first approach updates the statistics in a post-training step by passing data through the model. The second does it during the parameter update phase by averaging all buffers. Empirical evidence has shown that updating the statistics in normalization layers increases accuracy, but you may wish to empirically test which approach yields the best results in your problem.

.. note::
    :attr:`avg_fn` and `multi_avg_fn` are not saved in the :meth:`state_dict` of the model.

.. note::
    When :meth:`update_parameters` is called for the first time (i.e. :attr:`n_averaged` is `0`) the parameters of `model` are copied to the parameters of :class:`AveragedModel`. For every subsequent call of :meth:`update_parameters` the function `avg_fn` is used to update the parameters.

.. _Averaging Weights Leads to Wider Optima and Better Generalization:
    https://arxiv.org/abs/1803.05407
.. _There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average:
    https://arxiv.org/abs/1806.05594
.. _SWALP: Stochastic Weight Averaging in Low-Precision Training:
    https://arxiv.org/abs/1904.11943
.. _Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well:
    https://arxiv.org/abs/2001.02312
.. _Polyak averaging:
    https://paperswithcode.com/method/polyak-averaging

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
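The docstring mentions custom averaging functions but only shows the built-in EMA helper; a hedged sketch of a hand-written ``avg_fn`` with the same three-argument signature (the 0.99 decay and the toy model are arbitrary)::

    import torch
    from torch import nn

    def ema_avg(averaged_param, model_param, num_averaged, decay=0.99):
        # Same contract as described above: current average, current model value,
        # and how many models have been averaged so far.
        return decay * averaged_param + (1.0 - decay) * model_param

    model = nn.Linear(4, 2)
    ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)

    with torch.no_grad():
        model.weight.add_(1.0)          # stand-in for an optimizer step
    ema_model.update_parameters(model)  # first call copies the parameters
    ema_model.update_parameters(model)  # later calls apply ema_avg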
--- Parse Warning: 105 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SWALR in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/optim/swa_utils.py line=369.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Anneals the learning rate in each parameter group to a fixed value.

This learning rate scheduler is meant to be used with the Stochastic Weight Averaging (SWA) method (see `torch.optim.swa_utils.AveragedModel`).

Args:
    optimizer (torch.optim.Optimizer): wrapped optimizer
    swa_lrs (float or list): the learning rate value for all param groups together or separately for each group.
    annealing_epochs (int): number of epochs in the annealing phase (default: 10)
    annealing_strategy (str): "cos" or "linear"; specifies the annealing strategy: "cos" for cosine annealing, "linear" for linear annealing (default: "cos")
    last_epoch (int): the index of the last epoch (default: -1)

The :class:`SWALR` scheduler can be used together with other schedulers to switch to a constant learning rate late in the training, as in the example below.

Example:
    >>> # xdoctest: +SKIP("Undefined variables")
    >>> loader, optimizer, model = ...
    >>> lr_lambda = lambda epoch: 0.9
    >>> scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer,
    ...     lr_lambda=lr_lambda)
    >>> swa_scheduler = torch.optim.swa_utils.SWALR(optimizer,
    ...     anneal_strategy="linear", anneal_epochs=20, swa_lr=0.05)
    >>> swa_start = 160
    >>> for i in range(300):
    >>>     for input, target in loader:
    >>>         optimizer.zero_grad()
    >>>         loss_fn(model(input), target).backward()
    >>>         optimizer.step()
    >>>     if i > swa_start:
    >>>         swa_scheduler.step()
    >>>     else:
    >>>         scheduler.step()

.. _Averaging Weights Leads to Wider Optima and Better Generalization:
    https://arxiv.org/abs/1803.05407

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
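A hedged, self-contained sketch of the annealing behavior described above (the SGD optimizer, the 0.5 starting learning rate and the step counts are made up): after ``anneal_epochs`` calls to ``step()`` the group's learning rate settles at ``swa_lr``::

    import torch

    param = torch.nn.Parameter(torch.zeros(1))
    optimizer = torch.optim.SGD([param], lr=0.5)
    swa_scheduler = torch.optim.swa_utils.SWALR(
        optimizer, swa_lr=0.05, anneal_epochs=5, anneal_strategy="linear"
    )

    for epoch in range(8):
        optimizer.step()          # normally one or more batches would run here
        swa_scheduler.step()
        print(epoch, optimizer.param_groups[0]["lr"])  # decreases toward 0.05, then stays there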
--- Parse Warning: 106 / 116 ---
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=assert_close in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/testing/_comparison.py line=1263.
Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines')

Asserts that ``actual`` and ``expected`` are close.

If ``actual`` and ``expected`` are strided, non-quantized, real-valued, and finite, they are considered close if

.. math::

    \lvert \text{actual} - \text{expected} \rvert \le \texttt{atol} + \texttt{rtol} \cdot \lvert \text{expected} \rvert

Non-finite values (``-inf`` and ``inf``) are considered close if and only if they are equal. ``NaN``'s are only considered equal to each other if ``equal_nan`` is ``True``.

In addition, they are only considered close if they have the same

- :attr:`~torch.Tensor.device` (if ``check_device`` is ``True``),
- ``dtype`` (if ``check_dtype`` is ``True``),
- ``layout`` (if ``check_layout`` is ``True``), and
- stride (if ``check_stride`` is ``True``).

If either ``actual`` or ``expected`` is a meta tensor, only the attribute checks will be performed.

If ``actual`` and ``expected`` are sparse (either having COO, CSR, CSC, BSR, or BSC layout), their strided members are checked individually. Indices, namely ``indices`` for COO, ``crow_indices`` and ``col_indices`` for CSR and BSR, or ``ccol_indices`` and ``row_indices`` for CSC and BSC layouts, respectively, are always checked for equality whereas the values are checked for closeness according to the definition above.

If ``actual`` and ``expected`` are quantized, they are considered close if they have the same :meth:`~torch.Tensor.qscheme` and the result of :meth:`~torch.Tensor.dequantize` is close according to the definition above.

``actual`` and ``expected`` can be :class:`~torch.Tensor`'s or any tensor-or-scalar-likes from which :class:`torch.Tensor`'s can be constructed with :func:`torch.as_tensor`. Except for Python scalars the input types have to be directly related. In addition, ``actual`` and ``expected`` can be :class:`~collections.abc.Sequence`'s or :class:`~collections.abc.Mapping`'s in which case they are considered close if their structure matches and all their elements are considered close according to the above definition.

.. note::

    Python scalars are an exception to the type relation requirement, because their :func:`type`, i.e. :class:`int`, :class:`float`, and :class:`complex`, is equivalent to the ``dtype`` of a tensor-like. Thus, Python scalars of different types can be checked, but require ``check_dtype=False``.

Args:
    actual (Any): Actual input.
    expected (Any): Expected input.
    allow_subclasses (bool): If ``True`` (default) and except for Python scalars, inputs of directly related types are allowed. Otherwise type equality is required.
    rtol (Optional[float]): Relative tolerance. If specified ``atol`` must also be specified. If omitted, default values based on the :attr:`~torch.Tensor.dtype` are selected with the below table.
    atol (Optional[float]): Absolute tolerance. If specified ``rtol`` must also be specified. If omitted, default values based on the :attr:`~torch.Tensor.dtype` are selected with the below table.
    equal_nan (Union[bool, str]): If ``True``, two ``NaN`` values will be considered equal.
    check_device (bool): If ``True`` (default), asserts that corresponding tensors are on the same :attr:`~torch.Tensor.device`. If this check is disabled, tensors on different :attr:`~torch.Tensor.device`'s are moved to the CPU before being compared.
    check_dtype (bool): If ``True`` (default), asserts that corresponding tensors have the same ``dtype``. If this check is disabled, tensors with different ``dtype``'s are promoted to a common ``dtype`` (according to :func:`torch.promote_types`) before being compared.
    check_layout (bool): If ``True`` (default), asserts that corresponding tensors have the same ``layout``. If this check is disabled, tensors with different ``layout``'s are converted to strided tensors before being compared.
    check_stride (bool): If ``True`` and corresponding tensors are strided, asserts that they have the same stride.
    msg (Optional[Union[str, Callable[[str], str]]]): Optional error message to use in case a failure occurs during the comparison. Can also be passed as a callable, in which case it will be called with the generated message and should return the new message.

Raises:
    ValueError: If no :class:`torch.Tensor` can be constructed from an input.
    ValueError: If only ``rtol`` or ``atol`` is specified.
    AssertionError: If corresponding inputs are not Python scalars and are not directly related.
    AssertionError: If ``allow_subclasses`` is ``False``, but corresponding inputs are not Python scalars and have different types.
    AssertionError: If the inputs are :class:`~collections.abc.Sequence`'s, but their length does not match.
    AssertionError: If the inputs are :class:`~collections.abc.Mapping`'s, but their set of keys does not match.
    AssertionError: If corresponding tensors do not have the same :attr:`~torch.Tensor.shape`.
    AssertionError: If ``check_layout`` is ``True``, but corresponding tensors do not have the same :attr:`~torch.Tensor.layout`.
    AssertionError: If only one of corresponding tensors is quantized.
    AssertionError: If corresponding tensors are quantized, but have different :meth:`~torch.Tensor.qscheme`'s.
    AssertionError: If ``check_device`` is ``True``, but corresponding tensors are not on the same :attr:`~torch.Tensor.device`.
    AssertionError: If ``check_dtype`` is ``True``, but corresponding tensors do not have the same ``dtype``.
    AssertionError: If ``check_stride`` is ``True``, but corresponding strided tensors do not have the same stride.
    AssertionError: If the values of corresponding tensors are not close according to the definition above.

The following table displays the default ``rtol`` and ``atol`` for different ``dtype``'s. In case of mismatching ``dtype``'s, the maximum of both tolerances is used.

+----------------------------+------------+----------+
| ``dtype``                  | ``rtol``   | ``atol`` |
+============================+============+==========+
| :attr:`~torch.float16`     | ``1e-3``   | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.bfloat16`    | ``1.6e-2`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.float32`     | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.float64`     | ``1e-7``   | ``1e-7`` |
+----------------------------+------------+----------+
| :attr:`~torch.complex32`   | ``1e-3``   | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.complex64`   | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.complex128`  | ``1e-7``   | ``1e-7`` |
+----------------------------+------------+----------+
| :attr:`~torch.quint8`      | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.quint2x4`    | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.quint4x2`    | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.qint8`       | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| :attr:`~torch.qint32`      | ``1.3e-6`` | ``1e-5`` |
+----------------------------+------------+----------+
| other                      | ``0.0``    | ``0.0``  |
+----------------------------+------------+----------+

.. note::

    :func:`~torch.testing.assert_close` is highly configurable with strict default settings. Users are encouraged to :func:`~functools.partial` it to fit their use case. For example, if an equality check is needed, one might define an ``assert_equal`` that uses zero tolerances for every ``dtype`` by default:

    >>> import functools
    >>> assert_equal = functools.partial(torch.testing.assert_close, rtol=0, atol=0)
    >>> assert_equal(1e-9, 1e-10)
    Traceback (most recent call last):
    ...
    AssertionError: Scalars are not equal!

    Expected 1e-10 but got 1e-09.
    Absolute difference: 9.000000000000001e-10
    Relative difference: 9.0

Examples:
    >>> # tensor to tensor comparison
    >>> expected = torch.tensor([1e0, 1e-1, 1e-2])
    >>> actual = torch.acos(torch.cos(expected))
    >>> torch.testing.assert_close(actual, expected)

    >>> # scalar to scalar comparison
    >>> import math
    >>> expected = math.sqrt(2.0)
    >>> actual = 2.0 / math.sqrt(2.0)
    >>> torch.testing.assert_close(actual, expected)

    >>> # numpy array to numpy array comparison
    >>> import numpy as np
    >>> expected = np.array([1e0, 1e-1, 1e-2])
    >>> actual = np.arccos(np.cos(expected))
    >>> torch.testing.assert_close(actual, expected)

    >>> # sequence to sequence comparison
    >>> import numpy as np
    >>> # The types of the sequences do not have to match. They only have to have the same
    >>> # length and their elements have to match.
    >>> expected = [torch.tensor([1.0]), 2.0, np.array(3.0)]
    >>> actual = tuple(expected)
    >>> torch.testing.assert_close(actual, expected)

    >>> # mapping to mapping comparison
    >>> from collections import OrderedDict
    >>> import numpy as np
    >>> foo = torch.tensor(1.0)
    >>> bar = 2.0
    >>> baz = np.array(3.0)
    >>> # The types and a possible ordering of mappings do not have to match. They only
    >>> # have to have the same set of keys and their elements have to match.
    >>> expected = OrderedDict([("foo", foo), ("bar", bar), ("baz", baz)])
    >>> actual = {"baz": baz, "bar": bar, "foo": foo}
    >>> torch.testing.assert_close(actual, expected)

    >>> expected = torch.tensor([1.0, 2.0, 3.0])
    >>> actual = expected.clone()
    >>> # By default, directly related instances can be compared
    >>> torch.testing.assert_close(torch.nn.Parameter(actual), expected)
    >>> # This check can be made more strict with allow_subclasses=False
    >>> torch.testing.assert_close(
    ...     torch.nn.Parameter(actual), expected, allow_subclasses=False
    ... )
    Traceback (most recent call last):
    ...
    TypeError: No comparison pair was able to handle inputs of type <class 'torch.nn.parameter.Parameter'> and <class 'torch.Tensor'>.
    >>> # If the inputs are not directly related, they are never considered close
    >>> torch.testing.assert_close(actual.numpy(), expected)
    Traceback (most recent call last):
    ...
    TypeError: No comparison pair was able to handle inputs of type <class 'numpy.ndarray'> and <class 'torch.Tensor'>.
    >>> # Exceptions to these rules are Python scalars. They can be checked regardless of
    >>> # their type if check_dtype=False.
    >>> torch.testing.assert_close(1.0, 1, check_dtype=False)

    >>> # NaN != NaN by default.
    >>> expected = torch.tensor(float("Nan"))
    >>> actual = expected.clone()
    >>> torch.testing.assert_close(actual, expected)
    Traceback (most recent call last):
    ...
    AssertionError: Scalars are not close!

    Expected nan but got nan.
    Absolute difference: nan (up to 1e-05 allowed)
    Relative difference: nan (up to 1.3e-06 allowed)
    >>> torch.testing.assert_close(actual, expected, equal_nan=True)

    >>> expected = torch.tensor([1.0, 2.0, 3.0])
    >>> actual = torch.tensor([1.0, 4.0, 5.0])
    >>> # The default error message can be overwritten.
    >>> torch.testing.assert_close(actual, expected, msg="Argh, the tensors are not close!")
    Traceback (most recent call last):
    ...
    AssertionError: Argh, the tensors are not close!
    >>> # If msg is a callable, it can be used to augment the generated message with
    >>> # extra information
    >>> torch.testing.assert_close(
    ...     actual, expected, msg=lambda msg: f"Header\n\n{msg}\n\nFooter"
    ... )
    Traceback (most recent call last):
    ...
    AssertionError: Header

    Tensor-likes are not close!

    Mismatched elements: 2 / 3 (66.7%)
    Greatest absolute difference: 2.0 at index (1,) (up to 1e-05 allowed)
    Greatest relative difference: 1.0 at index (1,) (up to 1.3e-06 allowed)

    Footer

Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0))

warnings.warn(msg)
lambda s: (sorted(s), None, None), 2025-03-17T18:45:23.1667855Z ... lambda children, _: set(children), 2025-03-17T18:45:23.1667950Z ... ) 2025-03-17T18:45:23.1668040Z 2025-03-17T18:45:23.1668311Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1668394Z 2025-03-17T18:45:23.1668505Z warnings.warn(msg) 2025-03-17T18:45:23.1668588Z 2025-03-17T18:45:23.1668794Z --- Parse Warning: 108 / 116 --- 2025-03-17T18:45:23.1669800Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SelectiveCheckpointContext in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py line=1200. 2025-03-17T18:45:23.1670081Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1670169Z 2025-03-17T18:45:23.1670405Z Context passed to policy function during selective checkpointing. 2025-03-17T18:45:23.1670488Z 2025-03-17T18:45:23.1670736Z This class is used to pass relevant metadata to the policy function during 2025-03-17T18:45:23.1671012Z selective checkpointing. The metadata includes whether the current invocation 2025-03-17T18:45:23.1671199Z of the policy function is during recomputation or not. 2025-03-17T18:45:23.1671285Z 2025-03-17T18:45:23.1671382Z Example: 2025-03-17T18:45:23.1671501Z >>> # xdoctest: +SKIP(stub) 2025-03-17T18:45:23.1671591Z >>> 2025-03-17T18:45:23.1671765Z >>> def policy_fn(ctx, op, *args, **kwargs): 2025-03-17T18:45:23.1671883Z >>> print(ctx.is_recompute) 2025-03-17T18:45:23.1671982Z >>> 2025-03-17T18:45:23.1672264Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn) 2025-03-17T18:45:23.1672365Z >>> 2025-03-17T18:45:23.1672513Z >>> out = torch.utils.checkpoint.checkpoint( 2025-03-17T18:45:23.1672626Z >>> fn, x, y, 2025-03-17T18:45:23.1672734Z >>> use_reentrant=False, 2025-03-17T18:45:23.1672856Z >>> context_fn=context_fn, 2025-03-17T18:45:23.1672945Z >>> ) 2025-03-17T18:45:23.1673051Z 2025-03-17T18:45:23.1673311Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1673414Z 2025-03-17T18:45:23.1673517Z warnings.warn(msg) 2025-03-17T18:45:23.1673616Z 2025-03-17T18:45:23.1673809Z --- Parse Warning: 109 / 116 --- 2025-03-17T18:45:23.1674839Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=create_selective_checkpoint_contexts in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/checkpoint.py line=1334. 2025-03-17T18:45:23.1675107Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1675204Z 2025-03-17T18:45:23.1675451Z Helper to avoid recomputing certain ops during activation checkpointing. 2025-03-17T18:45:23.1675547Z 2025-03-17T18:45:23.1675766Z Use this with `torch.utils.checkpoint.checkpoint` to control which 2025-03-17T18:45:23.1675943Z operations are recomputed during the backward pass. 2025-03-17T18:45:23.1676055Z 2025-03-17T18:45:23.1676152Z Args: 2025-03-17T18:45:23.1676303Z policy_fn_or_list (Callable or List): 2025-03-17T18:45:23.1676483Z - If a policy function is provided, it should accept a 2025-03-17T18:45:23.1676730Z :class:`SelectiveCheckpointContext`, the :class:`OpOverload`, args and 2025-03-17T18:45:23.1676958Z kwargs to the op, and return a :class:`CheckpointPolicy` enum value 2025-03-17T18:45:23.1677204Z indicating whether the execution of the op should be recomputed or not. 
2025-03-17T18:45:23.1677423Z - If a list of operations is provided, it is equivalent to a policy 2025-03-17T18:45:23.1677638Z returning `CheckpointPolicy.MUST_SAVE` for the specified 2025-03-17T18:45:23.1677872Z operations and `CheckpointPolicy.PREFER_RECOMPUTE` for all other 2025-03-17T18:45:23.1677967Z operations. 2025-03-17T18:45:23.1678196Z allow_cache_entry_mutation (bool, optional): By default, an error is 2025-03-17T18:45:23.1678417Z raised if any tensors cached by selective activation checkpoint are 2025-03-17T18:45:23.1678644Z mutated in order to ensure correctness. If set to `True`, this check 2025-03-17T18:45:23.1678740Z is disabled. 2025-03-17T18:45:23.1678839Z Returns: 2025-03-17T18:45:23.1678958Z A tuple of two context managers. 2025-03-17T18:45:23.1679057Z 2025-03-17T18:45:23.1679146Z Example: 2025-03-17T18:45:23.1679259Z >>> # xdoctest: +REQUIRES(LINUX) 2025-03-17T18:45:23.1679368Z >>> import functools 2025-03-17T18:45:23.1679455Z >>> 2025-03-17T18:45:23.1679598Z >>> x = torch.rand(10, 10, requires_grad=True) 2025-03-17T18:45:23.1679728Z >>> y = torch.rand(10, 10, requires_grad=True) 2025-03-17T18:45:23.1679829Z >>> 2025-03-17T18:45:23.1679929Z >>> ops_to_save = [ 2025-03-17T18:45:23.1680059Z >>> torch.ops.aten.mm.default, 2025-03-17T18:45:23.1680150Z >>> ] 2025-03-17T18:45:23.1680248Z >>> 2025-03-17T18:45:23.1680384Z >>> def policy_fn(ctx, op, *args, **kwargs): 2025-03-17T18:45:23.1680498Z >>> if op in ops_to_save: 2025-03-17T18:45:23.1680630Z >>> return CheckpointPolicy.MUST_SAVE 2025-03-17T18:45:23.1680732Z >>> else: 2025-03-17T18:45:23.1680904Z >>> return CheckpointPolicy.PREFER_RECOMPUTE 2025-03-17T18:45:23.1681003Z >>> 2025-03-17T18:45:23.1681284Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, policy_fn) 2025-03-17T18:45:23.1681381Z >>> 2025-03-17T18:45:23.1681488Z >>> # or equivalently 2025-03-17T18:45:23.1681777Z >>> context_fn = functools.partial(create_selective_checkpoint_contexts, ops_to_save) 2025-03-17T18:45:23.1681863Z >>> 2025-03-17T18:45:23.1681970Z >>> def fn(x, y): 2025-03-17T18:45:23.1682177Z >>> return torch.sigmoid(torch.matmul(torch.matmul(x, y), y)) * y 2025-03-17T18:45:23.1682275Z >>> 2025-03-17T18:45:23.1682425Z >>> out = torch.utils.checkpoint.checkpoint( 2025-03-17T18:45:23.1682530Z >>> fn, x, y, 2025-03-17T18:45:23.1682637Z >>> use_reentrant=False, 2025-03-17T18:45:23.1682758Z >>> context_fn=context_fn, 2025-03-17T18:45:23.1682844Z >>> ) 2025-03-17T18:45:23.1682939Z 2025-03-17T18:45:23.1683197Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1683283Z 2025-03-17T18:45:23.1683394Z warnings.warn(msg) 2025-03-17T18:45:23.1683480Z 2025-03-17T18:45:23.1683682Z --- Parse Warning: 110 / 116 --- 2025-03-17T18:45:23.1684615Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=CppExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1064. 2025-03-17T18:45:23.1684921Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1685008Z 2025-03-17T18:45:23.1685196Z Create a :class:`setuptools.Extension` for C++. 2025-03-17T18:45:23.1685284Z 2025-03-17T18:45:23.1685544Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:23.1685777Z bare minimum (but often sufficient) arguments to build a C++ extension. 
2025-03-17T18:45:23.1685869Z 2025-03-17T18:45:23.1686084Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:23.1686247Z constructor. Full list arguments can be found at 2025-03-17T18:45:23.1686584Z https://setuptools.pypa.io/en/latest/userguide/ext_modules.html#extension-api-reference 2025-03-17T18:45:23.1686705Z 2025-03-17T18:45:23.1686802Z .. warning:: 2025-03-17T18:45:23.1687047Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:23.1687263Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:23.1687480Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:23.1687720Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:23.1687951Z APIs from libtorch (aten objects, operators and the dispatcher). For 2025-03-17T18:45:23.1688173Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:23.1688317Z register the ops through the dispatcher. 2025-03-17T18:45:23.1688403Z 2025-03-17T18:45:23.1688644Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:23.1688857Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:23.1689083Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:23.1689303Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:23.1689534Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:23.1689759Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:23.1689880Z CPython version of choice. 2025-03-17T18:45:23.1689967Z 2025-03-17T18:45:23.1690069Z Example: 2025-03-17T18:45:23.1690174Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1690360Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:23.1690481Z >>> from setuptools import setup 2025-03-17T18:45:23.1690715Z >>> from torch.utils.cpp_extension import BuildExtension, CppExtension 2025-03-17T18:45:23.1690809Z >>> setup( 2025-03-17T18:45:23.1690925Z ... name='extension', 2025-03-17T18:45:23.1691024Z ... ext_modules=[ 2025-03-17T18:45:23.1691133Z ... CppExtension( 2025-03-17T18:45:23.1691244Z ... name='extension', 2025-03-17T18:45:23.1691375Z ... sources=['extension.cpp'], 2025-03-17T18:45:23.1691500Z ... extra_compile_args=['-g'], 2025-03-17T18:45:23.1691665Z ... extra_link_args=['-Wl,--no-as-needed', '-lm']) 2025-03-17T18:45:23.1691757Z ... ], 2025-03-17T18:45:23.1691861Z ... cmdclass={ 2025-03-17T18:45:23.1691981Z ... 'build_ext': BuildExtension 2025-03-17T18:45:23.1692079Z ... }) 2025-03-17T18:45:23.1692165Z 2025-03-17T18:45:23.1692434Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1692516Z 2025-03-17T18:45:23.1692631Z warnings.warn(msg) 2025-03-17T18:45:23.1692717Z 2025-03-17T18:45:23.1692910Z --- Parse Warning: 111 / 116 --- 2025-03-17T18:45:23.1693845Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=CUDAExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1134. 2025-03-17T18:45:23.1694152Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1694266Z 2025-03-17T18:45:23.1694444Z Create a :class:`setuptools.Extension` for CUDA/C++. 
2025-03-17T18:45:23.1694530Z 2025-03-17T18:45:23.1694779Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:23.1694998Z bare minimum (but often sufficient) arguments to build a CUDA/C++ 2025-03-17T18:45:23.1695235Z extension. This includes the CUDA include path, library path and runtime 2025-03-17T18:45:23.1695338Z library. 2025-03-17T18:45:23.1695422Z 2025-03-17T18:45:23.1695648Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:23.1695831Z constructor. Full list arguments can be found at 2025-03-17T18:45:23.1696176Z https://setuptools.pypa.io/en/latest/userguide/ext_modules.html#extension-api-reference 2025-03-17T18:45:23.1696265Z 2025-03-17T18:45:23.1696373Z .. warning:: 2025-03-17T18:45:23.1696609Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:23.1696835Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:23.1697038Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:23.1697286Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:23.1697508Z APIs from libtorch (aten objects, operators and the dispatcher). For 2025-03-17T18:45:23.1697737Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:23.1697871Z register the ops through the dispatcher. 2025-03-17T18:45:23.1697970Z 2025-03-17T18:45:23.1698200Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:23.1698420Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:23.1698636Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:23.1698870Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:23.1699089Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:23.1699319Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:23.1699452Z CPython version of choice. 2025-03-17T18:45:23.1699555Z 2025-03-17T18:45:23.1699648Z Example: 2025-03-17T18:45:23.1699764Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1699918Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:23.1700053Z >>> from setuptools import setup 2025-03-17T18:45:23.1700286Z >>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension 2025-03-17T18:45:23.1700390Z >>> setup( 2025-03-17T18:45:23.1700502Z ... name='cuda_extension', 2025-03-17T18:45:23.1700615Z ... ext_modules=[ 2025-03-17T18:45:23.1700724Z ... CUDAExtension( 2025-03-17T18:45:23.1700856Z ... name='cuda_extension', 2025-03-17T18:45:23.1701025Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:23.1701173Z ... extra_compile_args={'cxx': ['-g'], 2025-03-17T18:45:23.1701300Z ... 'nvcc': ['-O2']}, 2025-03-17T18:45:23.1701472Z ... extra_link_args=['-Wl,--no-as-needed', '-lcuda']) 2025-03-17T18:45:23.1701563Z ... ], 2025-03-17T18:45:23.1701676Z ... cmdclass={ 2025-03-17T18:45:23.1701801Z ... 'build_ext': BuildExtension 2025-03-17T18:45:23.1701908Z ... }) 2025-03-17T18:45:23.1701997Z 2025-03-17T18:45:23.1702117Z Compute capabilities: 2025-03-17T18:45:23.1702204Z 2025-03-17T18:45:23.1702526Z By default the extension will be compiled to run on all archs of the cards visible during the 2025-03-17T18:45:23.1702855Z building process of the extension, plus PTX. 
If down the road a new card is installed the 2025-03-17T18:45:23.1703206Z extension may need to be recompiled. If a visible card has a compute capability (CC) that's 2025-03-17T18:45:23.1703518Z newer than the newest version for which your nvcc can build fully-compiled binaries, PyTorch 2025-03-17T18:45:23.1703829Z will make nvcc fall back to building kernels with the newest version of PTX your nvcc does 2025-03-17T18:45:23.1703958Z support (see below for details on PTX). 2025-03-17T18:45:23.1704056Z 2025-03-17T18:45:23.1704381Z You can override the default behavior using `TORCH_CUDA_ARCH_LIST` to explicitly specify which 2025-03-17T18:45:23.1704545Z CCs you want the extension to support: 2025-03-17T18:45:23.1704632Z 2025-03-17T18:45:23.1704839Z ``TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py`` 2025-03-17T18:45:23.1705081Z ``TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python build_my_extension.py`` 2025-03-17T18:45:23.1705183Z 2025-03-17T18:45:23.1705517Z The +PTX option causes extension kernel binaries to include PTX instructions for the specified 2025-03-17T18:45:23.1705854Z CC. PTX is an intermediate representation that allows kernels to runtime-compile for any CC >= 2025-03-17T18:45:23.1706167Z the specified CC (for example, 8.6+PTX generates PTX that can runtime-compile for any GPU with 2025-03-17T18:45:23.1706563Z CC >= 8.6). This improves your binary's forward compatibility. However, relying on older PTX to 2025-03-17T18:45:23.1706899Z provide forward compat by runtime-compiling for newer CCs can modestly reduce performance on 2025-03-17T18:45:23.1707197Z those newer CCs. If you know exact CC(s) of the GPUs you want to target, you're always better 2025-03-17T18:45:23.1707530Z off specifying them individually. For example, if you want your extension to run on 8.0 and 8.6, 2025-03-17T18:45:23.1707865Z "8.0+PTX" would work functionally because it includes PTX that can runtime-compile for 8.6, but 2025-03-17T18:45:23.1707970Z "8.0 8.6" would be better. 2025-03-17T18:45:23.1708068Z 2025-03-17T18:45:23.1708378Z Note that while it's possible to include all supported archs, the more archs get included the 2025-03-17T18:45:23.1708731Z slower the building process will be, as it will build a separate kernel image for each arch. 2025-03-17T18:45:23.1708818Z 2025-03-17T18:45:23.1709170Z Note that CUDA-11.5 nvcc will hit internal compiler error while parsing torch/extension.h on Windows. 2025-03-17T18:45:23.1709391Z To workaround the issue, move python binding logic to pure C++ file. 2025-03-17T18:45:23.1709492Z 2025-03-17T18:45:23.1709585Z Example use: 2025-03-17T18:45:23.1709701Z #include 2025-03-17T18:45:23.1709859Z at::Tensor SigmoidAlphaBlendForwardCuda(....) 2025-03-17T18:45:23.1709958Z 2025-03-17T18:45:23.1710051Z Instead of: 2025-03-17T18:45:23.1710165Z #include 2025-03-17T18:45:23.1710342Z torch::Tensor SigmoidAlphaBlendForwardCuda(...) 
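A minimal sketch of the same ``TORCH_CUDA_ARCH_LIST`` override done from Python, assuming the variable is read from the environment when ``build_ext`` runs; the CC list "8.0 8.6" is the one recommended in the text above, not a value from this log:

>>> # set the target CCs in-process before setup() executes
>>> import os
>>> os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0 8.6"
>>> # ... then run setup(ext_modules=[CUDAExtension(...)],
>>> #                    cmdclass={'build_ext': BuildExtension}) as usual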
2025-03-17T18:45:23.1710429Z 2025-03-17T18:45:23.1710724Z Currently open issue for nvcc bug: https://github.com/pytorch/pytorch/issues/69460 2025-03-17T18:45:23.1711248Z Complete workaround code example: https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48 2025-03-17T18:45:23.1711346Z 2025-03-17T18:45:23.1711464Z Relocatable device code linking: 2025-03-17T18:45:23.1711563Z 2025-03-17T18:45:23.1711850Z If you want to reference device symbols across compilation units (across object files), 2025-03-17T18:45:23.1712129Z the object files need to be built with `relocatable device code` (-rdc=true or -dc). 2025-03-17T18:45:23.1712505Z An exception to this rule is "dynamic parallelism" (nested kernel launches) which is not used a lot anymore. 2025-03-17T18:45:23.1712859Z `Relocatable device code` is less optimized so it needs to be used only on object files that need it. 2025-03-17T18:45:23.1713241Z Using `-dlto` (Device Link Time Optimization) at the device code compilation step and `dlink` step 2025-03-17T18:45:23.1713441Z helps reduce the protentional perf degradation of `-rdc`. 2025-03-17T18:45:23.1713616Z Note that it needs to be used at both steps to be useful. 2025-03-17T18:45:23.1713711Z 2025-03-17T18:45:23.1714092Z If you have `rdc` objects you need to have an extra `-dlink` (device linking) step before the CPU symbol linking step. 2025-03-17T18:45:23.1714280Z There is also a case where `-dlink` is used without `-rdc`: 2025-03-17T18:45:23.1714545Z when an extension is linked against a static lib containing rdc-compiled objects 2025-03-17T18:45:23.1714804Z like the [NVSHMEM library](https://developer.nvidia.com/nvshmem). 2025-03-17T18:45:23.1714890Z 2025-03-17T18:45:23.1715113Z Note: Ninja is required to build a CUDA Extension with RDC linking. 2025-03-17T18:45:23.1715202Z 2025-03-17T18:45:23.1715302Z Example: 2025-03-17T18:45:23.1715408Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1715573Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:23.1715676Z >>> CUDAExtension( 2025-03-17T18:45:23.1715797Z ... name='cuda_extension', 2025-03-17T18:45:23.1715969Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:23.1716081Z ... dlink=True, 2025-03-17T18:45:23.1716208Z ... dlink_libraries=["dlink_lib"], 2025-03-17T18:45:23.1716350Z ... extra_compile_args={'cxx': ['-g'], 2025-03-17T18:45:23.1716485Z ... 'nvcc': ['-O2', '-rdc=true']}) 2025-03-17T18:45:23.1716587Z 2025-03-17T18:45:23.1716848Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1716946Z 2025-03-17T18:45:23.1717050Z warnings.warn(msg) 2025-03-17T18:45:23.1717148Z 2025-03-17T18:45:23.1717364Z --- Parse Warning: 112 / 116 --- 2025-03-17T18:45:23.1718317Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=SyclExtension in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1325. 2025-03-17T18:45:23.1718618Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1718719Z 2025-03-17T18:45:23.1718891Z Creates a :class:`setuptools.Extension` for SYCL/C++. 2025-03-17T18:45:23.1718992Z 2025-03-17T18:45:23.1719241Z Convenience method that creates a :class:`setuptools.Extension` with the 2025-03-17T18:45:23.1719465Z bare minimum (but often sufficient) arguments to build a SYCL/C++ 2025-03-17T18:45:23.1719560Z extension. 
2025-03-17T18:45:23.1719661Z 2025-03-17T18:45:23.1719879Z All arguments are forwarded to the :class:`setuptools.Extension` 2025-03-17T18:45:23.1719985Z constructor. 2025-03-17T18:45:23.1720074Z 2025-03-17T18:45:23.1720178Z .. warning:: 2025-03-17T18:45:23.1720418Z The PyTorch python API (as provided in libtorch_python) cannot be built 2025-03-17T18:45:23.1720651Z with the flag ``py_limited_api=True``. When this flag is passed, it is 2025-03-17T18:45:23.1720856Z the user's responsibility in their library to not use APIs from 2025-03-17T18:45:23.1721107Z libtorch_python (in particular pytorch/python bindings) and to only use 2025-03-17T18:45:23.1721333Z APIs from libtorch (aten objects, operators and the dispatcher). For 2025-03-17T18:45:23.1721562Z example, to give access to custom ops from python, the library should 2025-03-17T18:45:23.1721699Z register the ops through the dispatcher. 2025-03-17T18:45:23.1721795Z 2025-03-17T18:45:23.1722024Z Contrary to CPython setuptools, who does not define -DPy_LIMITED_API 2025-03-17T18:45:23.1722249Z as a compile flag when py_limited_api is specified as an option for 2025-03-17T18:45:23.1722490Z the "bdist_wheel" command in ``setup``, PyTorch does! We will specify 2025-03-17T18:45:23.1722747Z -DPy_LIMITED_API=min_supported_cpython to best enforce consistency, 2025-03-17T18:45:23.1722970Z safety, and sanity in order to encourage best practices. To target a 2025-03-17T18:45:23.1723202Z different version, set min_supported_cpython to the hexcode of the 2025-03-17T18:45:23.1723313Z CPython version of choice. 2025-03-17T18:45:23.1723413Z 2025-03-17T18:45:23.1723504Z Example: 2025-03-17T18:45:23.1723607Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1723771Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:23.1724028Z >>> from torch.utils.cpp_extension import BuildExtension, SyclExtension 2025-03-17T18:45:23.1724126Z >>> setup( 2025-03-17T18:45:23.1724238Z ... name='xpu_extension', 2025-03-17T18:45:23.1724348Z ... ext_modules=[ 2025-03-17T18:45:23.1724454Z ... SyclExtension( 2025-03-17T18:45:23.1724583Z ... name='xpu_extension', 2025-03-17T18:45:23.1724758Z ... sources=['extension.cpp', 'extension_kernel.cpp'], 2025-03-17T18:45:23.1724952Z ... extra_compile_args={'cxx': ['-g', '-std=c++20', '-fPIC']}) 2025-03-17T18:45:23.1725041Z ... ], 2025-03-17T18:45:23.1725152Z ... cmdclass={ 2025-03-17T18:45:23.1725274Z ... 'build_ext': BuildExtension 2025-03-17T18:45:23.1725373Z ... }) 2025-03-17T18:45:23.1725458Z 2025-03-17T18:45:23.1725777Z By default the extension will be compiled to run on all archs of the cards visible during the 2025-03-17T18:45:23.1726042Z building process of the extension. If down the road a new card is installed the 2025-03-17T18:45:23.1726314Z extension may need to be recompiled. You can override the default behavior using 2025-03-17T18:45:23.1726629Z `TORCH_XPU_ARCH_LIST` to explicitly specify which device architectures you want the extension 2025-03-17T18:45:23.1726740Z to support: 2025-03-17T18:45:23.1726830Z 2025-03-17T18:45:23.1727050Z ``TORCH_XPU_ARCH_LIST="pvc,xe-lpg" python build_my_extension.py`` 2025-03-17T18:45:23.1727136Z 2025-03-17T18:45:23.1727457Z Note that while it's possible to include all supported archs, the more archs get included the 2025-03-17T18:45:23.1727784Z slower the building process will be, as it will build a separate kernel image for each arch. 2025-03-17T18:45:23.1727883Z 2025-03-17T18:45:23.1728030Z Note: Ninja is required to build SyclExtension. 
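Analogously for SYCL, a small sketch assuming ``TORCH_XPU_ARCH_LIST`` is likewise read from the environment at build time; the arch name "pvc" is taken from the shell example above and should be adjusted to the target hardware:

>>> # restrict the SYCL device archs before the (Ninja-backed) build starts
>>> import os
>>> os.environ["TORCH_XPU_ARCH_LIST"] = "pvc"
>>> # ... then run setup() with the SyclExtension shown in the example above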
2025-03-17T18:45:23.1728125Z 2025-03-17T18:45:23.1728387Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1728481Z 2025-03-17T18:45:23.1728586Z warnings.warn(msg) 2025-03-17T18:45:23.1728682Z 2025-03-17T18:45:23.1728881Z --- Parse Warning: 113 / 116 --- 2025-03-17T18:45:23.1729781Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1502. 2025-03-17T18:45:23.1730049Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1730147Z 2025-03-17T18:45:23.1730301Z Load a PyTorch C++ extension just-in-time (JIT). 2025-03-17T18:45:23.1730396Z 2025-03-17T18:45:23.1730610Z To load an extension, a Ninja build file is emitted, which is used to 2025-03-17T18:45:23.1730836Z compile the given sources into a dynamic library. This library is 2025-03-17T18:45:23.1731067Z subsequently loaded into the current Python process as a module and 2025-03-17T18:45:23.1731211Z returned from this function, ready for use. 2025-03-17T18:45:23.1731297Z 2025-03-17T18:45:23.1731522Z By default, the directory to which the build file is emitted and the 2025-03-17T18:45:23.1731793Z resulting library compiled to is ``/torch_extensions/``, where 2025-03-17T18:45:23.1732040Z ```` is the temporary folder on the current platform and ```` 2025-03-17T18:45:23.1732265Z the name of the extension. This location can be overridden in two ways. 2025-03-17T18:45:23.1732492Z First, if the ``TORCH_EXTENSIONS_DIR`` environment variable is set, it 2025-03-17T18:45:23.1732721Z replaces ``/torch_extensions`` and all extensions will be compiled 2025-03-17T18:45:23.1732960Z into subfolders of this directory. Second, if the ``build_directory`` 2025-03-17T18:45:23.1733203Z argument to this function is supplied, it overrides the entire path, i.e. 2025-03-17T18:45:23.1733415Z the library will be compiled into that folder directly. 2025-03-17T18:45:23.1733502Z 2025-03-17T18:45:23.1733731Z To compile the sources, the default system compiler (``c++``) is used, 2025-03-17T18:45:23.1733981Z which can be overridden by setting the ``CXX`` environment variable. To pass 2025-03-17T18:45:23.1734231Z additional arguments to the compilation process, ``extra_cflags`` or 2025-03-17T18:45:23.1734463Z ``extra_ldflags`` can be provided. For example, to compile your extension 2025-03-17T18:45:23.1734695Z with optimizations, pass ``extra_cflags=['-O3']``. You can also use 2025-03-17T18:45:23.1734859Z ``extra_cflags`` to pass further include directories. 2025-03-17T18:45:23.1734959Z 2025-03-17T18:45:23.1735206Z CUDA support with mixed compilation is provided. Simply pass CUDA source 2025-03-17T18:45:23.1735413Z files (``.cu`` or ``.cuh``) along with other sources. Such files will be 2025-03-17T18:45:23.1735670Z detected and compiled with nvcc rather than the C++ compiler. This includes 2025-03-17T18:45:23.1735901Z passing the CUDA lib64 directory as a library directory, and linking 2025-03-17T18:45:23.1736061Z ``cudart``. You can pass additional flags to nvcc via 2025-03-17T18:45:23.1736288Z ``extra_cuda_cflags``, just like with ``extra_cflags`` for C++. Various 2025-03-17T18:45:23.1736535Z heuristics for finding the CUDA install directory are used, which usually 2025-03-17T18:45:23.1736991Z work fine. If not, setting the ``CUDA_HOME`` environment variable is the 2025-03-17T18:45:23.1737091Z safest option. 
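A minimal sketch pulling together the overrides described above (compiler via ``CXX``, extra flags, and an explicit build directory); the file name, flags, and the ``./my_ext_build`` path are placeholders, not values from this log:

>>> import os
>>> from torch.utils.cpp_extension import load
>>> os.environ.setdefault("CXX", "g++")        # optional compiler override
>>> os.makedirs("./my_ext_build", exist_ok=True)
>>> module = load(
...     name="my_ext",
...     sources=["my_ext.cpp"],                # hypothetical C++ source file
...     extra_cflags=["-O3"],                  # flags forwarded to the compiler
...     build_directory="./my_ext_build",      # overrides <tmp>/torch_extensions/<name>
...     verbose=True,
... )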
2025-03-17T18:45:23.1737191Z 2025-03-17T18:45:23.1737495Z SYCL support with mixed compilation is provided. Simply pass SYCL source 2025-03-17T18:45:23.1737716Z files (``.sycl``) along with other sources. Such files will be detected 2025-03-17T18:45:23.1737943Z and compiled with SYCL compiler (such as Intel DPC++ Compiler) rather 2025-03-17T18:45:23.1738171Z than the C++ compiler. You can pass additional flags to SYCL compiler 2025-03-17T18:45:23.1738376Z via ``extra_sycl_cflags``, just like with ``extra_cflags`` for C++. 2025-03-17T18:45:23.1738608Z SYCL compiler is expected to be found via system PATH environment 2025-03-17T18:45:23.1738702Z variable. 2025-03-17T18:45:23.1738803Z 2025-03-17T18:45:23.1738894Z Args: 2025-03-17T18:45:23.1739115Z name: The name of the extension to build. This MUST be the same as the 2025-03-17T18:45:23.1739247Z name of the pybind11 module! 2025-03-17T18:45:23.1739458Z sources: A list of relative or absolute paths to C++ source files. 2025-03-17T18:45:23.1739702Z extra_cflags: optional list of compiler flags to forward to the build. 2025-03-17T18:45:23.1739931Z extra_cuda_cflags: optional list of compiler flags to forward to nvcc 2025-03-17T18:45:23.1740054Z when building CUDA sources. 2025-03-17T18:45:23.1740279Z extra_sycl_cflags: optional list of compiler flags to forward to SYCL 2025-03-17T18:45:23.1740418Z compiler when building SYCL sources. 2025-03-17T18:45:23.1740641Z extra_ldflags: optional list of linker flags to forward to the build. 2025-03-17T18:45:23.1740877Z extra_include_paths: optional list of include directories to forward 2025-03-17T18:45:23.1741024Z to the build. 2025-03-17T18:45:23.1741280Z build_directory: optional path to use as build workspace. 2025-03-17T18:45:23.1741469Z verbose: If ``True``, turns on verbose logging of load steps. 2025-03-17T18:45:23.1741708Z with_cuda: Determines whether CUDA headers and libraries are added to 2025-03-17T18:45:23.1741876Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:23.1742097Z automatically determined based on the existence of ``.cu`` or 2025-03-17T18:45:23.1742271Z ``.cuh`` in ``sources``. Set it to `True`` to force CUDA headers 2025-03-17T18:45:23.1742398Z and libraries to be included. 2025-03-17T18:45:23.1742662Z with_sycl: Determines whether SYCL headers and libraries are added to 2025-03-17T18:45:23.1742838Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:23.1743055Z automatically determined based on the existence of ``.sycl`` in 2025-03-17T18:45:23.1743238Z ``sources``. Set it to `True`` to force SYCL headers and 2025-03-17T18:45:23.1743354Z libraries to be included. 2025-03-17T18:45:23.1743575Z is_python_module: If ``True`` (default), imports the produced shared 2025-03-17T18:45:23.1743772Z library as a Python module. If ``False``, behavior depends on 2025-03-17T18:45:23.1743887Z ``is_standalone``. 2025-03-17T18:45:23.1744100Z is_standalone: If ``False`` (default) loads the constructed extension 2025-03-17T18:45:23.1744314Z into the process as a plain dynamic library. If ``True``, build a 2025-03-17T18:45:23.1744425Z standalone executable. 2025-03-17T18:45:23.1744526Z 2025-03-17T18:45:23.1744617Z Returns: 2025-03-17T18:45:23.1744748Z If ``is_python_module`` is ``True``: 2025-03-17T18:45:23.1744933Z Returns the loaded PyTorch extension as a Python module. 2025-03-17T18:45:23.1745031Z 2025-03-17T18:45:23.1745241Z If ``is_python_module`` is ``False`` and ``is_standalone`` is ``False``: 2025-03-17T18:45:23.1745472Z Returns nothing. 
(The shared library is loaded into the process as 2025-03-17T18:45:23.1745575Z a side effect.) 2025-03-17T18:45:23.1745671Z 2025-03-17T18:45:23.1745787Z If ``is_standalone`` is ``True``. 2025-03-17T18:45:23.1746032Z Return the path to the executable. (On Windows, TORCH_LIB_PATH is 2025-03-17T18:45:23.1746214Z added to the PATH environment variable as a side effect.) 2025-03-17T18:45:23.1746310Z 2025-03-17T18:45:23.1746402Z Example: 2025-03-17T18:45:23.1746599Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1746749Z >>> from torch.utils.cpp_extension import load 2025-03-17T18:45:23.1746860Z >>> module = load( 2025-03-17T18:45:23.1746967Z ... name='extension', 2025-03-17T18:45:23.1747144Z ... sources=['extension.cpp', 'extension_kernel.cu'], 2025-03-17T18:45:23.1747256Z ... extra_cflags=['-O2'], 2025-03-17T18:45:23.1747366Z ... verbose=True) 2025-03-17T18:45:23.1747455Z 2025-03-17T18:45:23.1747727Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1747815Z 2025-03-17T18:45:23.1747932Z warnings.warn(msg) 2025-03-17T18:45:23.1748021Z 2025-03-17T18:45:23.1748259Z --- Parse Warning: 114 / 116 --- 2025-03-17T18:45:23.1749179Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=load_inline in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/cpp_extension.py line=1811. 2025-03-17T18:45:23.1749466Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1749554Z 2025-03-17T18:45:23.1749786Z Load a PyTorch C++ extension just-in-time (JIT) from string sources. 2025-03-17T18:45:23.1749874Z 2025-03-17T18:45:23.1750124Z This function behaves exactly like :func:`load`, but takes its sources as 2025-03-17T18:45:23.1750421Z strings rather than filenames. These strings are stored to files in the 2025-03-17T18:45:23.1750653Z build directory, after which the behavior of :func:`load_inline` is 2025-03-17T18:45:23.1750763Z identical to :func:`load`. 2025-03-17T18:45:23.1750858Z 2025-03-17T18:45:23.1750949Z See `the 2025-03-17T18:45:23.1751284Z tests `_ 2025-03-17T18:45:23.1751428Z for good examples of using this function. 2025-03-17T18:45:23.1751513Z 2025-03-17T18:45:23.1751768Z Sources may omit two required parts of a typical non-inline C++ extension: 2025-03-17T18:45:23.1752042Z the necessary header includes, as well as the (pybind11) binding code. More 2025-03-17T18:45:23.1752300Z precisely, strings passed to ``cpp_sources`` are first concatenated into a 2025-03-17T18:45:23.1752493Z single ``.cpp`` file. This file is then prepended with ``#include 2025-03-17T18:45:23.1752611Z ``. 2025-03-17T18:45:23.1752698Z 2025-03-17T18:45:23.1752942Z Furthermore, if the ``functions`` argument is supplied, bindings will be 2025-03-17T18:45:23.1753184Z automatically generated for each function specified. ``functions`` can 2025-03-17T18:45:23.1753425Z either be a list of function names, or a dictionary mapping from function 2025-03-17T18:45:23.1753655Z names to docstrings. If a list is given, the name of each function is used 2025-03-17T18:45:23.1753765Z as its docstring. 2025-03-17T18:45:23.1753850Z 2025-03-17T18:45:23.1754082Z The sources in ``cuda_sources`` are concatenated into a separate ``.cu`` 2025-03-17T18:45:23.1754261Z file and prepended with ``torch/types.h``, ``cuda.h`` and 2025-03-17T18:45:23.1754487Z ``cuda_runtime.h`` includes. 
The ``.cpp`` and ``.cu`` files are compiled 2025-03-17T18:45:23.1754717Z separately, but ultimately linked into a single library. Note that no 2025-03-17T18:45:23.1754971Z bindings are generated for functions in ``cuda_sources`` per se. To bind 2025-03-17T18:45:23.1755195Z to a CUDA kernel, you must create a C++ function that calls it, and either 2025-03-17T18:45:23.1755425Z declare or define this C++ function in one of the ``cpp_sources`` (and 2025-03-17T18:45:23.1755542Z include its name in ``functions``). 2025-03-17T18:45:23.1755662Z 2025-03-17T18:45:23.1755890Z The sources in ``sycl_sources`` are concatenated into a separate ``.sycl`` 2025-03-17T18:45:23.1756124Z file and prepended with ``torch/types.h``, ``sycl/sycl.hpp`` includes. 2025-03-17T18:45:23.1756325Z The ``.cpp`` and ``.sycl`` files are compiled separately, but ultimately 2025-03-17T18:45:23.1756563Z linked into a single library. Note that no bindings are generated for 2025-03-17T18:45:23.1756781Z functions in ``sycl_sources`` per se. To bind to a SYCL kernel, you must 2025-03-17T18:45:23.1757006Z create a C++ function that calls it, and either declare or define this 2025-03-17T18:45:23.1757203Z C++ function in one of the ``cpp_sources`` (and include its name 2025-03-17T18:45:23.1757315Z in ``functions``). 2025-03-17T18:45:23.1757400Z 2025-03-17T18:45:23.1757601Z See :func:`load` for a description of arguments omitted below. 2025-03-17T18:45:23.1757686Z 2025-03-17T18:45:23.1757791Z Args: 2025-03-17T18:45:23.1758016Z cpp_sources: A string, or list of strings, containing C++ source code. 2025-03-17T18:45:23.1758255Z cuda_sources: A string, or list of strings, containing CUDA source code. 2025-03-17T18:45:23.1758480Z sycl_sources: A string, or list of strings, containing SYCL source code. 2025-03-17T18:45:23.1758709Z functions: A list of function names for which to generate function 2025-03-17T18:45:23.1758924Z bindings. If a dictionary is given, it should map function names to 2025-03-17T18:45:23.1759126Z docstrings (which are otherwise just the function names). 2025-03-17T18:45:23.1759405Z with_cuda: Determines whether CUDA headers and libraries are added to 2025-03-17T18:45:23.1759582Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:23.1759792Z automatically determined based on whether ``cuda_sources`` is 2025-03-17T18:45:23.1759965Z provided. Set it to ``True`` to force CUDA headers 2025-03-17T18:45:23.1760080Z and libraries to be included. 2025-03-17T18:45:23.1760319Z with_sycl: Determines whether SYCL headers and libraries are added to 2025-03-17T18:45:23.1760480Z the build. If set to ``None`` (default), this value is 2025-03-17T18:45:23.1760729Z automatically determined based on whether ``sycl_sources`` is 2025-03-17T18:45:23.1760888Z provided. Set it to ``True`` to force SYCL headers 2025-03-17T18:45:23.1761019Z and libraries to be included. 2025-03-17T18:45:23.1761236Z with_pytorch_error_handling: Determines whether pytorch error and 2025-03-17T18:45:23.1761455Z warning macros are handled by pytorch instead of pybind. To do 2025-03-17T18:45:23.1761677Z this, each function ``foo`` is called via an intermediary ``_safe_foo`` 2025-03-17T18:45:23.1761894Z function. This redirection might cause issues in obscure cases 2025-03-17T18:45:23.1762086Z of cpp. This flag should be set to ``False`` when this redirect 2025-03-17T18:45:23.1762197Z causes issues. 
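As a small sketch of the dictionary form of ``functions`` described above (function names mapped to docstrings); the module name and the ``twice`` helper are illustrative only:

>>> from torch.utils.cpp_extension import load_inline
>>> cpp_src = """
... at::Tensor twice(at::Tensor x) { return x + x; }
... """
>>> mod = load_inline(
...     name="inline_dict_demo",
...     cpp_sources=[cpp_src],
...     functions={"twice": "Return twice the input tensor."},
... )
>>> # the generated binding should expose mod.twice carrying the docstring given above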
2025-03-17T18:45:23.1762283Z 2025-03-17T18:45:23.1762384Z Example: 2025-03-17T18:45:23.1762538Z >>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CPP_EXT) 2025-03-17T18:45:23.1762712Z >>> from torch.utils.cpp_extension import load_inline 2025-03-17T18:45:23.1762810Z >>> source = """ 2025-03-17T18:45:23.1762971Z at::Tensor sin_add(at::Tensor x, at::Tensor y) { 2025-03-17T18:45:23.1763080Z return x.sin() + y.sin(); 2025-03-17T18:45:23.1763178Z } 2025-03-17T18:45:23.1763267Z """ 2025-03-17T18:45:23.1763421Z >>> module = load_inline(name='inline_extension', 2025-03-17T18:45:23.1763548Z ... cpp_sources=[source], 2025-03-17T18:45:23.1763682Z ... functions=['sin_add']) 2025-03-17T18:45:23.1763769Z 2025-03-17T18:45:23.1763867Z .. note:: 2025-03-17T18:45:23.1764135Z Since load_inline will just-in-time compile the source code, please ensure 2025-03-17T18:45:23.1764383Z that you have the right toolchains installed in the runtime. For example, 2025-03-17T18:45:23.1764612Z when loading C++, make sure a C++ compiler is available. If you're loading 2025-03-17T18:45:23.1764878Z a CUDA extension, you will need to additionally install the corresponding CUDA 2025-03-17T18:45:23.1765131Z toolkit (nvcc and any other dependencies your code has). Compiling toolchains 2025-03-17T18:45:23.1765388Z are not included when you install torch and must be additionally installed. 2025-03-17T18:45:23.1765473Z 2025-03-17T18:45:23.1765743Z During compiling, by default, the Ninja backend uses #CPUS + 2 workers to build 2025-03-17T18:45:23.1765967Z the extension. This may use up too many resources on some systems. One 2025-03-17T18:45:23.1766207Z can control the number of workers by setting the `MAX_JOBS` environment 2025-03-17T18:45:23.1766333Z variable to a non-negative number. 2025-03-17T18:45:23.1766434Z 2025-03-17T18:45:23.1766692Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1766790Z 2025-03-17T18:45:23.1766891Z warnings.warn(msg) 2025-03-17T18:45:23.1766978Z 2025-03-17T18:45:23.1767192Z --- Parse Warning: 115 / 116 --- 2025-03-17T18:45:23.1768188Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=ThroughputBenchmark in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/throughput_benchmark.py line=61. 2025-03-17T18:45:23.1768522Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1768619Z 2025-03-17T18:45:23.1768916Z This class is a wrapper around a c++ component throughput_benchmark::ThroughputBenchmark. 2025-03-17T18:45:23.1769015Z 2025-03-17T18:45:23.1769319Z This wrapper on the throughput_benchmark::ThroughputBenchmark component is responsible 2025-03-17T18:45:23.1769584Z for executing a PyTorch module (nn.Module or ScriptModule) under an inference 2025-03-17T18:45:23.1769819Z server like load. It can emulate multiple calling threads to a single module 2025-03-17T18:45:23.1770103Z provided. In the future we plan to enhance this component to support inter and 2025-03-17T18:45:23.1770350Z intra-op parallelism as well as multiple models running in a single process. 2025-03-17T18:45:23.1770450Z 2025-03-17T18:45:23.1770704Z Please note that even though nn.Module is supported, it might incur an overhead 2025-03-17T18:45:23.1770947Z from the need to hold GIL every time we execute Python code or pass around 2025-03-17T18:45:23.1771189Z inputs as Python objects. 
As soon as you have a ScriptModule version of your 2025-03-17T18:45:23.1771442Z model for inference deployment it is better to switch to using it in this 2025-03-17T18:45:23.1771533Z benchmark. 2025-03-17T18:45:23.1771628Z 2025-03-17T18:45:23.1771718Z Example:: 2025-03-17T18:45:23.1771802Z 2025-03-17T18:45:23.1771933Z >>> # xdoctest: +SKIP("undefined vars") 2025-03-17T18:45:23.1772078Z >>> from torch.utils import ThroughputBenchmark 2025-03-17T18:45:23.1772217Z >>> bench = ThroughputBenchmark(my_module) 2025-03-17T18:45:23.1772383Z >>> # Pre-populate benchmark's data set with the inputs 2025-03-17T18:45:23.1772497Z >>> for input in inputs: 2025-03-17T18:45:23.1772720Z ... # Both args and kwargs work, same as any PyTorch Module / ScriptModule 2025-03-17T18:45:23.1772864Z ... bench.add_input(input[0], x2=input[1]) 2025-03-17T18:45:23.1773063Z >>> # Inputs supplied above are randomly used during the execution 2025-03-17T18:45:23.1773180Z >>> stats = bench.benchmark( 2025-03-17T18:45:23.1773287Z ... num_calling_threads=4, 2025-03-17T18:45:23.1773400Z ... num_warmup_iters = 100, 2025-03-17T18:45:23.1773537Z ... num_iters = 1000, 2025-03-17T18:45:23.1773633Z ... ) 2025-03-17T18:45:23.1773816Z >>> print("Avg latency (ms): {}".format(stats.latency_avg_ms)) 2025-03-17T18:45:23.1774004Z >>> print("Number of iterations: {}".format(stats.num_iters)) 2025-03-17T18:45:23.1774092Z 2025-03-17T18:45:23.1774360Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1774444Z 2025-03-17T18:45:23.1774555Z warnings.warn(msg) 2025-03-17T18:45:23.1774639Z 2025-03-17T18:45:23.1774845Z --- Parse Warning: 116 / 116 --- 2025-03-17T18:45:23.1775811Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/xdoctest/core.py:423: UserWarning: Cannot scrape callname=DistributedSampler in modpath=/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/utils/data/distributed.py line=18. 2025-03-17T18:45:23.1776088Z Caused by: DoctestParseError('Failed to parse doctest in _label_docsrc_lines') 2025-03-17T18:45:23.1776297Z Sampler that restricts data loading to a subset of the dataset. 2025-03-17T18:45:23.1776392Z 2025-03-17T18:45:23.1776525Z It is especially useful in conjunction with 2025-03-17T18:45:23.1776784Z :class:`torch.nn.parallel.DistributedDataParallel`. In such a case, each 2025-03-17T18:45:23.1777050Z process can pass a :class:`~torch.utils.data.DistributedSampler` instance as a 2025-03-17T18:45:23.1777293Z :class:`~torch.utils.data.DataLoader` sampler, and load a subset of the 2025-03-17T18:45:23.1777430Z original dataset that is exclusive to it. 2025-03-17T18:45:23.1777551Z 2025-03-17T18:45:23.1777642Z .. note:: 2025-03-17T18:45:23.1777914Z Dataset is assumed to be of constant size and that any instance of it always 2025-03-17T18:45:23.1778059Z returns the same elements in the same order. 2025-03-17T18:45:23.1778151Z 2025-03-17T18:45:23.1778241Z Args: 2025-03-17T18:45:23.1778375Z dataset: Dataset used for sampling. 2025-03-17T18:45:23.1778599Z num_replicas (int, optional): Number of processes participating in 2025-03-17T18:45:23.1778859Z distributed training. By default, :attr:`world_size` is retrieved from the 2025-03-17T18:45:23.1779003Z current distributed group. 2025-03-17T18:45:23.1779254Z rank (int, optional): Rank of the current process within :attr:`num_replicas`. 2025-03-17T18:45:23.1779460Z By default, :attr:`rank` is retrieved from the current distributed 2025-03-17T18:45:23.1779559Z group. 
2025-03-17T18:45:23.1779791Z shuffle (bool, optional): If ``True`` (default), sampler will shuffle the 2025-03-17T18:45:23.1779897Z indices. 2025-03-17T18:45:23.1780093Z seed (int, optional): random seed used to shuffle the sampler if 2025-03-17T18:45:23.1780305Z :attr:`shuffle=True`. This number should be identical across all 2025-03-17T18:45:23.1780474Z processes in the distributed group. Default: ``0``. 2025-03-17T18:45:23.1780703Z drop_last (bool, optional): if ``True``, then the sampler will drop the 2025-03-17T18:45:23.1780904Z tail of the data to make it evenly divisible across the number of 2025-03-17T18:45:23.1781117Z replicas. If ``False``, the sampler will add extra indices to make 2025-03-17T18:45:23.1781332Z the data evenly divisible across the replicas. Default: ``False``. 2025-03-17T18:45:23.1781426Z 2025-03-17T18:45:23.1781525Z .. warning:: 2025-03-17T18:45:23.1781732Z In distributed mode, calling the :meth:`set_epoch` method at 2025-03-17T18:45:23.1781992Z the beginning of each epoch **before** creating the :class:`DataLoader` iterator 2025-03-17T18:45:23.1782263Z is necessary to make shuffling work properly across multiple epochs. Otherwise, 2025-03-17T18:45:23.1782417Z the same ordering will be always used. 2025-03-17T18:45:23.1782513Z 2025-03-17T18:45:23.1782606Z Example:: 2025-03-17T18:45:23.1782700Z 2025-03-17T18:45:23.1782804Z >>> # xdoctest: +SKIP 2025-03-17T18:45:23.1783037Z >>> sampler = DistributedSampler(dataset) if is_distributed else None 2025-03-17T18:45:23.1783215Z >>> loader = DataLoader(dataset, shuffle=(sampler is None), 2025-03-17T18:45:23.1783345Z ... sampler=sampler) 2025-03-17T18:45:23.1783482Z >>> for epoch in range(start_epoch, n_epochs): 2025-03-17T18:45:23.1783597Z ... if is_distributed: 2025-03-17T18:45:23.1783719Z ... sampler.set_epoch(epoch) 2025-03-17T18:45:23.1783831Z ... train(loader) 2025-03-17T18:45:23.1783917Z 2025-03-17T18:45:23.1784172Z Original Error: TokenError('unexpected EOF in multi-line statement', (1, 0)) 2025-03-17T18:45:23.1784262Z 2025-03-17T18:45:23.1784363Z warnings.warn(msg) 2025-03-17T18:45:23.1784458Z 2025-03-17T18:45:23.1784572Z  2025-03-17T18:45:23.1784763Z === Found 10 run-time warnings === 2025-03-17T18:45:23.1784946Z --- Runtime Warning: 1 / 10 --- 2025-03-17T18:45:23.1785227Z example = 2025-03-17T18:45:23.1786673Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_tensor.py:1365: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /var/lib/jenkins/workspace/c10/core/TensorImpl.h:1938.) 2025-03-17T18:45:23.1786865Z return super().refine_names(names) 2025-03-17T18:45:23.1786956Z 2025-03-17T18:45:23.1787160Z --- Runtime Warning: 2 / 10 --- 2025-03-17T18:45:23.1787473Z example = 2025-03-17T18:45:23.1788120Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/library.py:288: UserWarning: Warning only once for all operators, other operators may also be overridden. 
2025-03-17T18:45:23.1788439Z Overriding a previously registered kernel for the same operator and the same dispatch key 2025-03-17T18:45:23.1788721Z operator: aten::div.Tensor(Tensor self, Tensor other) -> Tensor 2025-03-17T18:45:23.1789037Z registered at /var/lib/jenkins/workspace/build/aten/src/ATen/RegisterSchema.cpp:6 2025-03-17T18:45:23.1789138Z dispatch key: CPU 2025-03-17T18:45:23.1789585Z previous kernel: registered at /var/lib/jenkins/workspace/aten/src/ATen/LegacyBatchingRegistrations.cpp:1079 2025-03-17T18:45:23.1790147Z new kernel: registered at /dev/null:811 (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/core/dispatch/OperatorEntry.cpp:161.) 2025-03-17T18:45:23.1790319Z impl_fn(self.ns, name.split("::")[-1], dispatch_key) 2025-03-17T18:45:23.1790406Z 2025-03-17T18:45:23.1790600Z --- Runtime Warning: 3 / 10 --- 2025-03-17T18:45:23.1790842Z example = 2025-03-17T18:45:23.1792708Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nested/__init__.py:117: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/NestedTensorImpl.cpp:182.) 2025-03-17T18:45:23.1792962Z return torch._nested_tensor_from_tensor_list(ts, dtype, None, device, None) 2025-03-17T18:45:23.1793060Z 2025-03-17T18:45:23.1793240Z --- Runtime Warning: 4 / 10 --- 2025-03-17T18:45:23.1793509Z example = 2025-03-17T18:45:23.1795164Z :1: UserWarning: Sparse CSR tensor support is in beta state. If you miss a functionality in the sparse tensor support, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at /var/lib/jenkins/workspace/aten/src/ATen/SparseCsrTensorImpl.cpp:55.) 2025-03-17T18:45:23.1795266Z 2025-03-17T18:45:23.1795449Z --- Runtime Warning: 5 / 10 --- 2025-03-17T18:45:23.1795772Z example = 2025-03-17T18:45:23.1797310Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/fx/experimental/const_fold.py:264: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! 
Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer 2025-03-17T18:45:23.1797478Z new_node = root_const_gm.graph.get_attr(in_node.target) 2025-03-17T18:45:23.1797573Z 2025-03-17T18:45:23.1797755Z --- Runtime Warning: 6 / 10 --- 2025-03-17T18:45:23.1798058Z example = 2025-03-17T18:45:23.1799160Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py:382: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-03-17T18:45:23.1799302Z warnings.warn( 2025-03-17T18:45:23.1799410Z 2025-03-17T18:45:23.1799606Z --- Runtime Warning: 7 / 10 --- 2025-03-17T18:45:23.1799938Z example = 2025-03-17T18:45:23.1801044Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/modules/transformer.py:382: UserWarning: enable_nested_tensor is True, but self.use_nested_tensor is False because encoder_layer.self_attn.batch_first was not True(use batch_first for better inference performance) 2025-03-17T18:45:23.1801142Z warnings.warn( 2025-03-17T18:45:23.1801263Z 2025-03-17T18:45:23.1801448Z --- Runtime Warning: 8 / 10 --- 2025-03-17T18:45:23.1801739Z example = 2025-03-17T18:45:23.1802559Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`. 2025-03-17T18:45:23.1802696Z WeightNorm.apply(module, name, dim) 2025-03-17T18:45:23.1802781Z 2025-03-17T18:45:23.1802971Z --- Runtime Warning: 9 / 10 --- 2025-03-17T18:45:23.1807405Z example = 2025-03-17T18:45:23.1808288Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`. 2025-03-17T18:45:23.1808420Z WeightNorm.apply(module, name, dim) 2025-03-17T18:45:23.1808513Z 2025-03-17T18:45:23.1808737Z --- Runtime Warning: 10 / 10 --- 2025-03-17T18:45:23.1809034Z example = 2025-03-17T18:45:23.1809898Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/_export/utils.py:453: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead. 2025-03-17T18:45:23.1810016Z _register_pytree_node( 2025-03-17T18:45:23.1810102Z 2025-03-17T18:45:23.1810480Z === 342 passed, 367 skipped, 126 warnings in 12.65 seconds === 2025-03-17T18:45:23.1810687Z Running test_autoload_disable 1/1 ... [2025-03-17 18:45:22.972896] 2025-03-17T18:45:25.5514538Z running install 2025-03-17T18:45:25.5515963Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:45:25.5517501Z !! 2025-03-17T18:45:25.5517704Z 2025-03-17T18:45:25.5517947Z ******************************************************************************** 2025-03-17T18:45:25.5518635Z Please avoid running ``setup.py`` directly. 2025-03-17T18:45:25.5519420Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:45:25.5520128Z standards-based tools. 2025-03-17T18:45:25.5520496Z 2025-03-17T18:45:25.5521069Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 
2025-03-17T18:45:25.5522091Z ******************************************************************************** 2025-03-17T18:45:25.5522564Z 2025-03-17T18:45:25.5522731Z !! 2025-03-17T18:45:25.5523143Z self.initialize_options() 2025-03-17T18:45:25.5650344Z running build 2025-03-17T18:45:25.5650821Z running build_py 2025-03-17T18:45:25.5725548Z creating build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:25.5734955Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:25.5744715Z running build_ext 2025-03-17T18:45:25.6521189Z building 'torch_test_cpp_extension.cpp' extension 2025-03-17T18:45:25.6522251Z creating build/temp.linux-x86_64-cpython-313 2025-03-17T18:45:25.6528565Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c extension.cpp -o build/temp.linux-x86_64-cpython-313/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:26.8091252Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/Exceptions.h:12, 2025-03-17T18:45:26.8092289Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2025-03-17T18:45:26.8093220Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/extension.h:9, 2025-03-17T18:45:26.8093799Z from extension.cpp:1: 2025-03-17T18:45:26.8106617Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘class pybind11::class_’: 2025-03-17T18:45:26.8107557Z extension.cpp:45:53: required from here 2025-03-17T18:45:26.8109102Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1539:7: warning: ‘pybind11::class_’ declared with greater visibility than its base ‘pybind11::detail::generic_type’ [-Wattributes] 2025-03-17T18:45:26.8110460Z 1539 | class class_ : public detail::generic_type { 2025-03-17T18:45:26.8110829Z | ^~~~~~ 2025-03-17T18:45:26.8112549Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]’: 2025-03-17T18:45:26.8113970Z extension.cpp:45:53: required from here 2025-03-17T18:45:26.8117393Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1599:28: warning: ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]::’ declared with greater visibility than the type of its field ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) 
[with Extra = {}; type_ = MatrixMultiplier; options = {}]::::’ [-Wattributes] 2025-03-17T18:45:26.8120157Z 1599 | with_internals([&](internals &internals) { 2025-03-17T18:45:26.8120573Z | ^~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:26.8121116Z 1600 | auto &instances = record.module_local ? get_local_internals().registered_types_cpp 2025-03-17T18:45:26.8121711Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:26.8122180Z 1601 | : internals.registered_types_cpp; 2025-03-17T18:45:26.8122608Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:26.8123044Z 1602 | instances[std::type_index(typeid(type_alias))] 2025-03-17T18:45:26.8123471Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:26.8123885Z 1603 | = instances[std::type_index(typeid(type))]; 2025-03-17T18:45:26.8124297Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:26.8124700Z 1604 | }); 2025-03-17T18:45:26.8125016Z | ~ 2025-03-17T18:45:26.8128272Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:27.1842144Z building 'torch_test_cpp_extension.maia' extension 2025-03-17T18:45:27.1847067Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c maia_extension.cpp -o build/temp.linux-x86_64-cpython-313/maia_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=maia -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:28.2495481Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/maia_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:28.6130184Z building 'torch_test_cpp_extension.rng' extension 2025-03-17T18:45:28.6134246Z g++ -pthread -B 
/opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c rng_extension.cpp -o build/temp.linux-x86_64-cpython-313/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:30.0639027Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:30.0640579Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:30.0641970Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:30.0643577Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:30.0644312Z from rng_extension.cpp:6: 2025-03-17T18:45:30.0645437Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1158: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:30.0646272Z 1158 | # pragma unroll 2025-03-17T18:45:30.0646543Z | 2025-03-17T18:45:30.0647141Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1198, 2025-03-17T18:45:30.0648297Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:30.0649138Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:30.0650026Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:30.0650963Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:30.0651668Z from rng_extension.cpp:6: 2025-03-17T18:45:30.0652484Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:59: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:30.0653287Z 59 | #pragma unroll 2025-03-17T18:45:30.0653550Z | 2025-03-17T18:45:30.0654248Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:72: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:30.0655040Z 72 | #pragma unroll 2025-03-17T18:45:30.0655289Z | 2025-03-17T18:45:30.0655983Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:87: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:30.0656774Z 87 | #pragma unroll 2025-03-17T18:45:30.0657028Z | 2025-03-17T18:45:30.0657587Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1199, 2025-03-17T18:45:30.0658524Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:30.0659361Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:30.0660176Z from 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:30.0661163Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:30.0661863Z from rng_extension.cpp:6: 2025-03-17T18:45:30.0662694Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_mask.h:153: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:30.0663515Z 153 | #pragma unroll 2025-03-17T18:45:30.0663778Z | 2025-03-17T18:45:30.0667088Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/rng_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:30.4401920Z running install_lib 2025-03-17T18:45:30.4484287Z creating install/opt/conda/envs/py_3.13/lib/python3.13/site-packages 2025-03-17T18:45:30.4487752Z creating install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:30.4489362Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/__init__.py -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:30.4491301Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:30.4574946Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:30.4653925Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:30.4745614Z byte-compiling ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension/__init__.py to __init__.cpython-313.pyc 2025-03-17T18:45:30.4748064Z running install_egg_info 2025-03-17T18:45:30.4925077Z running egg_info 2025-03-17T18:45:30.4992340Z creating torch_test_cpp_extension.egg-info 2025-03-17T18:45:30.4993797Z writing torch_test_cpp_extension.egg-info/PKG-INFO 2025-03-17T18:45:30.4997079Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2025-03-17T18:45:30.4998747Z writing entry points to torch_test_cpp_extension.egg-info/entry_points.txt 2025-03-17T18:45:30.5000721Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2025-03-17T18:45:30.5001887Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:30.5080120Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:30.5087224Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:30.5088953Z Copying 
torch_test_cpp_extension.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info 2025-03-17T18:45:30.5094558Z running install_scripts 2025-03-17T18:45:33.6877107Z 2025-03-17T18:45:33.6877555Z Running tests... 2025-03-17T18:45:33.6877971Z ---------------------------------------------------------------------- 2025-03-17T18:45:33.8232401Z . 2025-03-17T18:45:33.8232750Z ---------------------------------------------------------------------- 2025-03-17T18:45:33.8233159Z Ran 1 test in 0.136s 2025-03-17T18:45:33.8233325Z 2025-03-17T18:45:33.8233567Z OK 2025-03-17T18:45:33.8233689Z 2025-03-17T18:45:33.8233819Z Generating XML reports... 2025-03-17T18:45:34.4396043Z Running test_autoload_enable 1/1 ... [2025-03-17 18:45:34.439294] 2025-03-17T18:45:37.0028539Z running install 2025-03-17T18:45:37.0030062Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:45:37.0030983Z !! 2025-03-17T18:45:37.0031121Z 2025-03-17T18:45:37.0031266Z ******************************************************************************** 2025-03-17T18:45:37.0031719Z Please avoid running ``setup.py`` directly. 2025-03-17T18:45:37.0032210Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:45:37.0032606Z standards-based tools. 2025-03-17T18:45:37.0032858Z 2025-03-17T18:45:37.0033189Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 2025-03-17T18:45:37.0033802Z ******************************************************************************** 2025-03-17T18:45:37.0034067Z 2025-03-17T18:45:37.0034155Z !! 2025-03-17T18:45:37.0034450Z self.initialize_options() 2025-03-17T18:45:37.0160575Z running build 2025-03-17T18:45:37.0160848Z running build_py 2025-03-17T18:45:37.0235860Z creating build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:37.0238585Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:37.0242116Z running build_ext 2025-03-17T18:45:37.1011774Z building 'torch_test_cpp_extension.cpp' extension 2025-03-17T18:45:37.1013082Z creating build/temp.linux-x86_64-cpython-313 2025-03-17T18:45:37.1018506Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c extension.cpp -o build/temp.linux-x86_64-cpython-313/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:38.1498847Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/Exceptions.h:12, 2025-03-17T18:45:38.1499931Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2025-03-17T18:45:38.1500828Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/extension.h:9, 2025-03-17T18:45:38.1501406Z from extension.cpp:1: 2025-03-17T18:45:38.1502738Z 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘class pybind11::class_’: 2025-03-17T18:45:38.1503617Z extension.cpp:45:53: required from here 2025-03-17T18:45:38.1505117Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1539:7: warning: ‘pybind11::class_’ declared with greater visibility than its base ‘pybind11::detail::generic_type’ [-Wattributes] 2025-03-17T18:45:38.1508482Z 1539 | class class_ : public detail::generic_type { 2025-03-17T18:45:38.1508857Z | ^~~~~~ 2025-03-17T18:45:38.1510587Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]’: 2025-03-17T18:45:38.1512012Z extension.cpp:45:53: required from here 2025-03-17T18:45:38.1515441Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1599:28: warning: ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]::’ declared with greater visibility than the type of its field ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]::::’ [-Wattributes] 2025-03-17T18:45:38.1518289Z 1599 | with_internals([&](internals &internals) { 2025-03-17T18:45:38.1518684Z | ^~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:38.1519230Z 1600 | auto &instances = record.module_local ? get_local_internals().registered_types_cpp 2025-03-17T18:45:38.1519839Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:38.1520310Z 1601 | : internals.registered_types_cpp; 2025-03-17T18:45:38.1520738Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:38.1521179Z 1602 | instances[std::type_index(typeid(type_alias))] 2025-03-17T18:45:38.1521602Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:38.1522015Z 1603 | = instances[std::type_index(typeid(type))]; 2025-03-17T18:45:38.1522489Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:45:38.1522886Z 1604 | }); 2025-03-17T18:45:38.1523159Z | ~ 2025-03-17T18:45:38.1526415Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:38.5170572Z building 'torch_test_cpp_extension.maia' extension 2025-03-17T18:45:38.5175856Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include 
-I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c maia_extension.cpp -o build/temp.linux-x86_64-cpython-313/maia_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=maia -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:39.4670265Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/maia_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:39.8211088Z building 'torch_test_cpp_extension.rng' extension 2025-03-17T18:45:39.8215393Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c rng_extension.cpp -o build/temp.linux-x86_64-cpython-313/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:40.9182646Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:40.9183596Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:40.9184447Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:40.9185402Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:40.9186267Z from rng_extension.cpp:6: 2025-03-17T18:45:40.9187270Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1158: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:40.9188105Z 1158 | # pragma unroll 2025-03-17T18:45:40.9188375Z | 2025-03-17T18:45:40.9188952Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1198, 2025-03-17T18:45:40.9190058Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:40.9191061Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:40.9191949Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:40.9192886Z from 
/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:40.9193592Z from rng_extension.cpp:6: 2025-03-17T18:45:40.9194405Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:59: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:40.9195202Z 59 | #pragma unroll 2025-03-17T18:45:40.9195463Z | 2025-03-17T18:45:40.9196162Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:72: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:40.9196952Z 72 | #pragma unroll 2025-03-17T18:45:40.9197212Z | 2025-03-17T18:45:40.9197912Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:87: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:40.9198700Z 87 | #pragma unroll 2025-03-17T18:45:40.9198956Z | 2025-03-17T18:45:40.9199514Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1199, 2025-03-17T18:45:40.9200452Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:45:40.9201287Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:45:40.9202163Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:45:40.9203101Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:45:40.9203801Z from rng_extension.cpp:6: 2025-03-17T18:45:40.9204638Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_mask.h:153: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:45:40.9205456Z 153 | #pragma unroll 2025-03-17T18:45:40.9205717Z | 2025-03-17T18:45:40.9208916Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/rng_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:41.2952444Z running install_lib 2025-03-17T18:45:41.3032818Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:41.3126539Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:41.3219075Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:41.3318876Z running install_egg_info 2025-03-17T18:45:41.3493008Z running egg_info 2025-03-17T18:45:41.3562683Z 
writing torch_test_cpp_extension.egg-info/PKG-INFO 2025-03-17T18:45:41.3565889Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2025-03-17T18:45:41.3576948Z writing entry points to torch_test_cpp_extension.egg-info/entry_points.txt 2025-03-17T18:45:41.3587514Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2025-03-17T18:45:41.3668810Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:41.3677089Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:41.3687115Z removing './install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info' (and everything under it) 2025-03-17T18:45:41.3688639Z Copying torch_test_cpp_extension.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info 2025-03-17T18:45:41.3694296Z running install_scripts 2025-03-17T18:45:44.5312049Z 2025-03-17T18:45:44.5312532Z Running tests... 2025-03-17T18:45:44.5312919Z ---------------------------------------------------------------------- 2025-03-17T18:45:44.6651690Z . 2025-03-17T18:45:44.6652239Z ---------------------------------------------------------------------- 2025-03-17T18:45:44.6652656Z Ran 1 test in 0.134s 2025-03-17T18:45:44.6652823Z 2025-03-17T18:45:44.6652912Z OK 2025-03-17T18:45:44.6653042Z 2025-03-17T18:45:44.6653179Z Generating XML reports... 2025-03-17T18:45:45.2803392Z Running test_cpp_extensions_aot_ninja 1/1 ... [2025-03-17 18:45:45.279983] 2025-03-17T18:45:47.8652098Z running install 2025-03-17T18:45:47.8653411Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:45:47.8654234Z !! 2025-03-17T18:45:47.8654354Z 2025-03-17T18:45:47.8654723Z ******************************************************************************** 2025-03-17T18:45:47.8655132Z Please avoid running ``setup.py`` directly. 2025-03-17T18:45:47.8655562Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:45:47.8655967Z standards-based tools. 2025-03-17T18:45:47.8656163Z 2025-03-17T18:45:47.8656492Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 2025-03-17T18:45:47.8657042Z ******************************************************************************** 2025-03-17T18:45:47.8657298Z 2025-03-17T18:45:47.8657396Z !! 2025-03-17T18:45:47.8657634Z self.initialize_options() 2025-03-17T18:45:47.8786832Z running build 2025-03-17T18:45:47.8787316Z running build_py 2025-03-17T18:45:47.8863135Z creating build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:47.8865367Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:45:47.8868948Z running build_ext 2025-03-17T18:45:47.9930605Z building 'torch_test_cpp_extension.cpp' extension 2025-03-17T18:45:47.9931784Z creating /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313 2025-03-17T18:45:48.0222769Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T18:45:48.0223569Z Compiling objects... 2025-03-17T18:45:48.0223912Z Using envvar MAX_JOBS (6) as the number of workers... 
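The distutils/ninja output in this block comes from building the test's C++ extensions ahead of time with torch.utils.cpp_extension. A sketch of the kind of setup.py that produces such a build; the extension and source names mirror the log above, the rest is illustrative:

# setup.py sketch for an ahead-of-time torch C++ extension build.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="torch_test_cpp_extension",
    ext_modules=[
        # CppExtension adds the torch include paths and libtorch link flags,
        # which is where the -I.../torch/include and -lc10 -ltorch arguments above come from.
        CppExtension("torch_test_cpp_extension.cpp", ["extension.cpp"]),
    ],
    # BuildExtension emits a ninja build file when ninja is available,
    # matching the "Emitting ninja build file ..." lines in this log.
    cmdclass={"build_ext": BuildExtension},
)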
2025-03-17T18:45:48.8242302Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/extension.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:48.8340354Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:49.0771808Z building 'torch_test_cpp_extension.maia' extension 2025-03-17T18:45:49.1065480Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T18:45:49.1072781Z Compiling objects... 2025-03-17T18:45:49.1073194Z Using envvar MAX_JOBS (6) as the number of workers... 
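The "Using envvar MAX_JOBS (6) as the number of workers" lines show the knob that caps parallel ninja compile jobs. The same environment variable applies to the just-in-time torch.utils.cpp_extension.load path, sketched below with a placeholder module name:

import os
from torch.utils.cpp_extension import load

# Cap the number of parallel ninja compile jobs, as in the log above.
os.environ["MAX_JOBS"] = "6"

# JIT alternative to the setup.py route used by this test: compile extension.cpp
# on the fly and import the resulting module. "my_cpp_ext" is a placeholder name.
module = load(
    name="my_cpp_ext",
    sources=["extension.cpp"],
    verbose=True,
)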
2025-03-17T18:45:49.8658890Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/maia_extension.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/maia_extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/maia_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=maia -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:49.8710074Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/maia_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:50.0907704Z building 'torch_test_cpp_extension.rng' extension 2025-03-17T18:45:50.1203634Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T18:45:50.1204411Z Compiling objects... 2025-03-17T18:45:50.1204753Z Using envvar MAX_JOBS (6) as the number of workers... 
2025-03-17T18:45:50.9974400Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/rng_extension.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/var/lib/jenkins/workspace/test/cpp_extensions/self_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/rng_extension.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:51.0028754Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib /var/lib/jenkins/workspace/test/cpp_extensions/build/temp.linux-x86_64-cpython-313/rng_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:45:51.2437601Z running install_lib 2025-03-17T18:45:51.2519072Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:51.2563298Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:51.2606052Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:45:51.2654641Z running install_egg_info 2025-03-17T18:45:51.2829135Z running egg_info 2025-03-17T18:45:51.2897401Z writing torch_test_cpp_extension.egg-info/PKG-INFO 2025-03-17T18:45:51.2900345Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2025-03-17T18:45:51.2902309Z writing entry points to torch_test_cpp_extension.egg-info/entry_points.txt 2025-03-17T18:45:51.2904497Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2025-03-17T18:45:51.2977300Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:51.2985737Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:45:51.2987213Z removing './install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info' (and everything under it) 2025-03-17T18:45:51.2988846Z Copying torch_test_cpp_extension.egg-info to 
./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info 2025-03-17T18:45:51.2994596Z running install_scripts 2025-03-17T18:45:53.3763941Z running install 2025-03-17T18:45:53.3765027Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:45:53.3766128Z !! 2025-03-17T18:45:53.3766262Z 2025-03-17T18:45:53.3766407Z ******************************************************************************** 2025-03-17T18:45:53.3766810Z Please avoid running ``setup.py`` directly. 2025-03-17T18:45:53.3767240Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:45:53.3767631Z standards-based tools. 2025-03-17T18:45:53.3767838Z 2025-03-17T18:45:53.3768154Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 2025-03-17T18:45:53.3768759Z ******************************************************************************** 2025-03-17T18:45:53.3769019Z 2025-03-17T18:45:53.3769107Z !! 2025-03-17T18:45:53.3769345Z self.initialize_options() 2025-03-17T18:45:53.3894158Z running build 2025-03-17T18:45:53.3894662Z running build_ext 2025-03-17T18:45:53.4948709Z building 'no_python_abi_suffix_test' extension 2025-03-17T18:45:53.4951830Z creating /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313 2025-03-17T18:45:53.5254705Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T18:45:53.5255989Z Compiling objects... 2025-03-17T18:45:53.5256328Z Using envvar MAX_JOBS (6) as the number of workers... 2025-03-17T18:45:53.5963106Z [1/1] c++ -MMD -MF /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/no_python_abi_suffix_test.o.d -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.13/include/python3.13 -c -c /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/no_python_abi_suffix_test.cpp -o /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/no_python_abi_suffix_test.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=no_python_abi_suffix_test -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:45:53.6002659Z creating build/lib.linux-x86_64-cpython-313 2025-03-17T18:45:53.6007895Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib 
/var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/no_python_abi_suffix_test.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/no_python_abi_suffix_test.so 2025-03-17T18:45:53.6553845Z running install_lib 2025-03-17T18:45:53.6636291Z creating install/opt/conda/envs/py_3.13/lib/python3.13/site-packages 2025-03-17T18:45:53.6639378Z copying build/lib.linux-x86_64-cpython-313/no_python_abi_suffix_test.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages 2025-03-17T18:45:53.6644020Z running install_egg_info 2025-03-17T18:45:53.6816594Z running egg_info 2025-03-17T18:45:53.6884131Z creating no_python_abi_suffix_test.egg-info 2025-03-17T18:45:53.6885258Z writing no_python_abi_suffix_test.egg-info/PKG-INFO 2025-03-17T18:45:53.6888490Z writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt 2025-03-17T18:45:53.6890611Z writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt 2025-03-17T18:45:53.6891659Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2025-03-17T18:45:53.6963349Z reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2025-03-17T18:45:53.6969911Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2025-03-17T18:45:53.6971370Z Copying no_python_abi_suffix_test.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/no_python_abi_suffix_test-0.0.0-py3.13.egg-info 2025-03-17T18:45:53.6976114Z running install_scripts 2025-03-17T18:45:54.0928185Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:45:54.0931000Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cpp_extensions_aot_ninja.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:45:54.092866] 2025-03-17T18:46:00.7203161Z 2025-03-17T18:46:00.7204592Z test_cpp_extensions_aot_ninja 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cpp_extensions_aot_ninja_1.1_ac77513849f8a989_.log 2025-03-17T18:46:00.7219104Z Running 20 items in this shard: test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_backward, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_cublas_extension, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_cuda_dlink_libs, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_cuda_extension, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_cusolver_extension, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_extension_function, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_extension_module, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_libtorch_agnostic, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_mps_extension, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_no_python_abi_suffix_sets_the_correct_library_name, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_optional, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_python_agnostic, test/test_cpp_extensions_aot_ninja.py::TestCppExtensionAOT::test_sycl_extension, test/test_cpp_extensions_aot_ninja.py::TestPybindTypeCasters::test_pybind_return_types, test/test_cpp_extensions_aot_ninja.py::TestMAIATensor::test_add, test/test_cpp_extensions_aot_ninja.py::TestMAIATensor::test_conv_backend_override, test/test_cpp_extensions_aot_ninja.py::TestMAIATensor::test_unregistered, test/test_cpp_extensions_aot_ninja.py::TestMAIATensor::test_zeros, test/test_cpp_extensions_aot_ninja.py::TestRNGExtension::test_rng, test/test_cpp_extensions_aot_ninja.py::TestTorchLibrary::test_torch_library 2025-03-17T18:46:00.7232957Z 2025-03-17T18:46:00.7233444Z Running test_cpp_extensions_aot_no_ninja 1/1 ... [2025-03-17 18:46:00.720918] 2025-03-17T18:46:03.2901709Z running install 2025-03-17T18:46:03.2903437Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:46:03.2904261Z !! 2025-03-17T18:46:03.2904382Z 2025-03-17T18:46:03.2904551Z ******************************************************************************** 2025-03-17T18:46:03.2904955Z Please avoid running ``setup.py`` directly. 2025-03-17T18:46:03.2905381Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:46:03.2905781Z standards-based tools. 2025-03-17T18:46:03.2905977Z 2025-03-17T18:46:03.2906308Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 2025-03-17T18:46:03.2906920Z ******************************************************************************** 2025-03-17T18:46:03.2907372Z 2025-03-17T18:46:03.2907478Z !! 
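The SetuptoolsDeprecationWarning banner repeated before each of these builds recommends pypa/build or pip as the front end instead of invoking setup.py directly. A small sketch of that, assuming the build package is installed in the environment:

import subprocess
import sys

# pypa/build: produce an sdist and a wheel from the project directory.
subprocess.run([sys.executable, "-m", "build", "."], check=True)

# Or build and install in one step via pip (same PEP 517 machinery underneath).
subprocess.run([sys.executable, "-m", "pip", "install", "."], check=True)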
2025-03-17T18:46:03.2907721Z self.initialize_options() 2025-03-17T18:46:03.3037322Z running build 2025-03-17T18:46:03.3037604Z running build_py 2025-03-17T18:46:03.3114412Z creating build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:46:03.3116779Z copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension 2025-03-17T18:46:03.3120280Z running build_ext 2025-03-17T18:46:03.3886125Z building 'torch_test_cpp_extension.cpp' extension 2025-03-17T18:46:03.3887375Z creating build/temp.linux-x86_64-cpython-313 2025-03-17T18:46:03.3892675Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c extension.cpp -o build/temp.linux-x86_64-cpython-313/extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:46:04.3683467Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/Exceptions.h:12, 2025-03-17T18:46:04.3685121Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2025-03-17T18:46:04.3686616Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/extension.h:9, 2025-03-17T18:46:04.3687539Z from extension.cpp:1: 2025-03-17T18:46:04.3689184Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘class pybind11::class_’: 2025-03-17T18:46:04.3690080Z extension.cpp:45:53: required from here 2025-03-17T18:46:04.3693167Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1539:7: warning: ‘pybind11::class_’ declared with greater visibility than its base ‘pybind11::detail::generic_type’ [-Wattributes] 2025-03-17T18:46:04.3694769Z 1539 | class class_ : public detail::generic_type { 2025-03-17T18:46:04.3695150Z | ^~~~~~ 2025-03-17T18:46:04.3696843Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h: In instantiation of ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]’: 2025-03-17T18:46:04.3698268Z extension.cpp:45:53: required from here 2025-03-17T18:46:04.3701551Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/pybind11/pybind11.h:1599:28: warning: ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]::’ declared with greater visibility than the type of its field ‘pybind11::class_< , >::class_(pybind11::handle, const char*, const Extra& ...) [with Extra = {}; type_ = MatrixMultiplier; options = {}]::::’ [-Wattributes] 2025-03-17T18:46:04.3704305Z 1599 | with_internals([&](internals &internals) { 2025-03-17T18:46:04.3704704Z | ^~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:46:04.3705276Z 1600 | auto &instances = record.module_local ? 
get_local_internals().registered_types_cpp 2025-03-17T18:46:04.3705873Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:46:04.3706505Z 1601 | : internals.registered_types_cpp; 2025-03-17T18:46:04.3707000Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:46:04.3707442Z 1602 | instances[std::type_index(typeid(type_alias))] 2025-03-17T18:46:04.3707858Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:46:04.3708275Z 1603 | = instances[std::type_index(typeid(type))]; 2025-03-17T18:46:04.3708690Z | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2025-03-17T18:46:04.3709034Z 1604 | }); 2025-03-17T18:46:04.3709349Z | ~ 2025-03-17T18:46:04.3712598Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:46:04.7377623Z building 'torch_test_cpp_extension.maia' extension 2025-03-17T18:46:04.7382264Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c maia_extension.cpp -o build/temp.linux-x86_64-cpython-313/maia_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=maia -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:46:05.6821702Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/maia_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:46:06.0363122Z building 'torch_test_cpp_extension.rng' extension 2025-03-17T18:46:06.0367584Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include 
-I/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/torch/csrc/api/include -Iself_compiler_include_dirs_test -I/opt/conda/envs/py_3.13/include/python3.13 -c rng_extension.cpp -o build/temp.linux-x86_64-cpython-313/rng_extension.o -g -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_clang\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1002\" -DTORCH_EXTENSION_NAME=rng -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++17 2025-03-17T18:46:07.1499499Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:46:07.1501415Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:46:07.1502805Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:46:07.1504394Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:46:07.1505101Z from rng_extension.cpp:6: 2025-03-17T18:46:07.1505950Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1158: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:46:07.1506963Z 1158 | # pragma unroll 2025-03-17T18:46:07.1507233Z | 2025-03-17T18:46:07.1507803Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1198, 2025-03-17T18:46:07.1508755Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:46:07.1509933Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:46:07.1511071Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:46:07.1512017Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:46:07.1512720Z from rng_extension.cpp:6: 2025-03-17T18:46:07.1513536Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:59: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:46:07.1514338Z 59 | #pragma unroll 2025-03-17T18:46:07.1514601Z | 2025-03-17T18:46:07.1515297Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:72: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:46:07.1516100Z 72 | #pragma unroll 2025-03-17T18:46:07.1516363Z | 2025-03-17T18:46:07.1517056Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_n.h:87: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:46:07.1517846Z 87 | #pragma unroll 2025-03-17T18:46:07.1518197Z | 2025-03-17T18:46:07.1518746Z In file included from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_base.h:1199, 2025-03-17T18:46:07.1519728Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec256/vec256.h:8, 2025-03-17T18:46:07.1520572Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec.h:7, 2025-03-17T18:46:07.1521391Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/Loops.h:37, 2025-03-17T18:46:07.1522335Z from /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/native/cpu/DistributionTemplates.h:9, 2025-03-17T18:46:07.1523034Z from rng_extension.cpp:6: 
2025-03-17T18:46:07.1523868Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/include/ATen/cpu/vec/vec_mask.h:153: warning: ignoring #pragma unroll [-Wunknown-pragmas] 2025-03-17T18:46:07.1524672Z 153 | #pragma unroll 2025-03-17T18:46:07.1524931Z | 2025-03-17T18:46:07.1528162Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib build/temp.linux-x86_64-cpython-313/rng_extension.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so 2025-03-17T18:46:07.5255439Z running install_lib 2025-03-17T18:46:07.5335562Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/cpp.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:46:07.5422615Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/maia.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:46:07.5506057Z copying build/lib.linux-x86_64-cpython-313/torch_test_cpp_extension/rng.cpython-313-x86_64-linux-gnu.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension 2025-03-17T18:46:07.5599244Z running install_egg_info 2025-03-17T18:46:07.5773425Z running egg_info 2025-03-17T18:46:07.5842005Z writing torch_test_cpp_extension.egg-info/PKG-INFO 2025-03-17T18:46:07.5845367Z writing dependency_links to torch_test_cpp_extension.egg-info/dependency_links.txt 2025-03-17T18:46:07.5847334Z writing entry points to torch_test_cpp_extension.egg-info/entry_points.txt 2025-03-17T18:46:07.5849458Z writing top-level names to torch_test_cpp_extension.egg-info/top_level.txt 2025-03-17T18:46:07.5923296Z reading manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:46:07.5931733Z writing manifest file 'torch_test_cpp_extension.egg-info/SOURCES.txt' 2025-03-17T18:46:07.5933390Z removing './install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info' (and everything under it) 2025-03-17T18:46:07.5935077Z Copying torch_test_cpp_extension.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch_test_cpp_extension-0.0.0-py3.13.egg-info 2025-03-17T18:46:07.5942334Z running install_scripts 2025-03-17T18:46:09.6624158Z running install 2025-03-17T18:46:09.6625292Z /opt/conda/envs/py_3.13/lib/python3.13/site-packages/setuptools/_distutils/cmd.py:79: SetuptoolsDeprecationWarning: setup.py install is deprecated. 2025-03-17T18:46:09.6626116Z !! 2025-03-17T18:46:09.6626252Z 2025-03-17T18:46:09.6627024Z ******************************************************************************** 2025-03-17T18:46:09.6627455Z Please avoid running ``setup.py`` directly. 2025-03-17T18:46:09.6627884Z Instead, use pypa/build, pypa/installer or other 2025-03-17T18:46:09.6628283Z standards-based tools. 
2025-03-17T18:46:09.6628492Z 2025-03-17T18:46:09.6628814Z See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details. 2025-03-17T18:46:09.6629369Z ******************************************************************************** 2025-03-17T18:46:09.6629631Z 2025-03-17T18:46:09.6629718Z !! 2025-03-17T18:46:09.6629966Z self.initialize_options() 2025-03-17T18:46:09.6757581Z running build 2025-03-17T18:46:09.6758067Z running build_ext 2025-03-17T18:46:09.7819001Z building 'no_python_abi_suffix_test' extension 2025-03-17T18:46:09.8114240Z Emitting ninja build file /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/build.ninja... 2025-03-17T18:46:09.8115211Z Compiling objects... 2025-03-17T18:46:09.8115540Z Using envvar MAX_JOBS (6) as the number of workers... 2025-03-17T18:46:09.8370538Z ninja: no work to do. 2025-03-17T18:46:09.8415987Z g++ -pthread -B /opt/conda/envs/py_3.13/compiler_compat -fno-strict-overflow -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -fPIC -O2 -isystem /opt/conda/envs/py_3.13/include -pthread -B /opt/conda/envs/py_3.13/compiler_compat -shared -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib -Wl,-rpath,/opt/conda/envs/py_3.13/lib -Wl,-rpath-link,/opt/conda/envs/py_3.13/lib -L/opt/conda/envs/py_3.13/lib /var/lib/jenkins/workspace/test/cpp_extensions/no_python_abi_suffix_test/build/temp.linux-x86_64-cpython-313/no_python_abi_suffix_test.o -L/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib -lc10 -ltorch -ltorch_cpu -ltorch_python -o build/lib.linux-x86_64-cpython-313/no_python_abi_suffix_test.so 2025-03-17T18:46:09.8964113Z running install_lib 2025-03-17T18:46:09.9043952Z copying build/lib.linux-x86_64-cpython-313/no_python_abi_suffix_test.so -> ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages 2025-03-17T18:46:09.9049027Z running install_egg_info 2025-03-17T18:46:09.9221958Z running egg_info 2025-03-17T18:46:09.9290214Z writing no_python_abi_suffix_test.egg-info/PKG-INFO 2025-03-17T18:46:09.9293891Z writing dependency_links to no_python_abi_suffix_test.egg-info/dependency_links.txt 2025-03-17T18:46:09.9306949Z writing top-level names to no_python_abi_suffix_test.egg-info/top_level.txt 2025-03-17T18:46:09.9388776Z reading manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2025-03-17T18:46:09.9396844Z writing manifest file 'no_python_abi_suffix_test.egg-info/SOURCES.txt' 2025-03-17T18:46:09.9408493Z removing './install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/no_python_abi_suffix_test-0.0.0-py3.13.egg-info' (and everything under it) 2025-03-17T18:46:09.9410720Z Copying no_python_abi_suffix_test.egg-info to ./install/opt/conda/envs/py_3.13/lib/python3.13/site-packages/no_python_abi_suffix_test-0.0.0-py3.13.egg-info 2025-03-17T18:46:09.9416273Z running install_scripts 2025-03-17T18:46:10.3374183Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:10.3376858Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cpp_extensions_aot_no_ninja.py', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
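The g++ commands above come from building the ahead-of-time (AOT) C++ test extensions (torch_test_cpp_extension.cpp/.maia/.rng and no_python_abi_suffix_test). Below is a minimal sketch of the kind of setup.py that drives such a build, using the documented torch.utils.cpp_extension API; the package name my_cpp_ext and the source file my_extension.cpp are hypothetical placeholders, not the actual test sources.

    # setup.py -- minimal sketch of an AOT C++ extension build of the kind compiled above.
    # "my_cpp_ext" and my_extension.cpp are hypothetical placeholders.
    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CppExtension

    setup(
        name="my_cpp_ext",
        ext_modules=[
            CppExtension(
                # CppExtension supplies the torch include directories and the
                # c10/torch/torch_cpu/torch_python link libraries seen in the g++ lines above.
                name="my_cpp_ext",            # becomes -DTORCH_EXTENSION_NAME=my_cpp_ext
                sources=["my_extension.cpp"],
                extra_compile_args=["-g"],    # the builds above also pass -g
            )
        ],
        # BuildExtension injects flags such as -DTORCH_API_INCLUDE_EXTENSION_H and -std=c++17.
        cmdclass={"build_ext": BuildExtension},
    )

Per the SetuptoolsDeprecationWarning printed above, such a package would normally be built through a standards-based front end (for example: python -m build --wheel) rather than by invoking setup.py install directly; the "Using envvar MAX_JOBS (6)" line shows that the number of parallel ninja compile jobs is capped via the MAX_JOBS environment variable.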
[2025-03-17 18:46:10.337462] 2025-03-17T18:46:17.0201944Z 2025-03-17T18:46:17.0203050Z test_cpp_extensions_aot_no_ninja 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cpp_extensions_aot_no_ninja_1.1_de96bfc24b04b98a_.log 2025-03-17T18:46:17.0211543Z Running 20 items in this shard: test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_backward, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_cublas_extension, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_cuda_dlink_libs, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_cuda_extension, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_cusolver_extension, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_extension_function, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_extension_module, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_libtorch_agnostic, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_mps_extension, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_no_python_abi_suffix_sets_the_correct_library_name, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_optional, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_python_agnostic, test/test_cpp_extensions_aot_no_ninja.py::TestCppExtensionAOT::test_sycl_extension, test/test_cpp_extensions_aot_no_ninja.py::TestPybindTypeCasters::test_pybind_return_types, test/test_cpp_extensions_aot_no_ninja.py::TestMAIATensor::test_add, test/test_cpp_extensions_aot_no_ninja.py::TestMAIATensor::test_conv_backend_override, test/test_cpp_extensions_aot_no_ninja.py::TestMAIATensor::test_unregistered, test/test_cpp_extensions_aot_no_ninja.py::TestMAIATensor::test_zeros, test/test_cpp_extensions_aot_no_ninja.py::TestRNGExtension::test_rng, test/test_cpp_extensions_aot_no_ninja.py::TestTorchLibrary::test_torch_library 2025-03-17T18:46:17.0218839Z 2025-03-17T18:46:17.0219091Z Running dynamo/test_dynamic_shapes 1/1 ... [2025-03-17 18:46:17.020454] 2025-03-17T18:46:17.0219629Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:17.0220838Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_dynamic_shapes.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:17.020761] 2025-03-17T18:46:22.1541440Z 2025-03-17T18:46:22.1542563Z dynamo/test_dynamic_shapes 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_dynamic_shapes_1.1_66148cf7cc91d005_.log 2025-03-17T18:46:22.1543374Z 2025-03-17T18:46:22.1544874Z Running dynamo/test_interop 1/1 ... [2025-03-17 18:46:22.154306] 2025-03-17T18:46:22.1545577Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:22.1548905Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_interop.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:46:22.154626] 2025-03-17T18:46:25.2450650Z 2025-03-17T18:46:25.2451943Z dynamo/test_interop 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_interop_1.1_ca65cc9a1e043938_.log 2025-03-17T18:46:25.2452652Z 2025-03-17T18:46:25.2454528Z Running test_appending_byte_serializer 1/1 ... [2025-03-17 18:46:25.245248] 2025-03-17T18:46:25.2455235Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:25.2458334Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_appending_byte_serializer.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:25.245572] 2025-03-17T18:46:28.3702012Z 2025-03-17T18:46:28.3703355Z test_appending_byte_serializer 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_appending_byte_serializer_1.1_a63dcbab84b0fa4e_.log 2025-03-17T18:46:28.3704257Z 2025-03-17T18:46:28.3705575Z Running dynamo/test_sdpa 1/1 ... [2025-03-17 18:46:28.370366] 2025-03-17T18:46:28.3706015Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:28.3709501Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sdpa.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:28.370700] 2025-03-17T18:46:31.4770956Z 2025-03-17T18:46:31.4772026Z dynamo/test_sdpa 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sdpa_1.1_8e7906986b35d8b0_.log 2025-03-17T18:46:31.4772852Z 2025-03-17T18:46:31.4773130Z Running dynamo/test_frame_init 1/1 ... [2025-03-17 18:46:31.477137] 2025-03-17T18:46:31.4773590Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:31.4776497Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_frame_init.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:31.477441] 2025-03-17T18:46:34.5680992Z 2025-03-17T18:46:34.5681986Z dynamo/test_frame_init 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_frame_init_1.1_c13ece9b00021916_.log 2025-03-17T18:46:34.5682687Z 2025-03-17T18:46:34.5684455Z Running dynamo/test_sys 1/1 ... [2025-03-17 18:46:34.568261] 2025-03-17T18:46:34.5684883Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:34.5687949Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sys.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:34.568584] 2025-03-17T18:46:37.6756811Z 2025-03-17T18:46:37.6757732Z dynamo/test_sys 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sys_1.1_64cb24f9e574fa3b_.log 2025-03-17T18:46:37.6758803Z 2025-03-17T18:46:37.6759430Z Running dynamo/test_trace_rules 1/1 ... 
[2025-03-17 18:46:37.675741] 2025-03-17T18:46:37.6759981Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:37.6763060Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_trace_rules.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:37.676063] 2025-03-17T18:46:40.7616774Z 2025-03-17T18:46:40.7617875Z dynamo/test_trace_rules 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_trace_rules_1.1_cfad119dedf48a0d_.log 2025-03-17T18:46:40.7618833Z 2025-03-17T18:46:40.7619766Z Running dynamo/test_config 1/1 ... [2025-03-17 18:46:40.761827] 2025-03-17T18:46:40.7620213Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:40.7623608Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_config.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:40.762151] 2025-03-17T18:46:43.8485215Z 2025-03-17T18:46:43.8486208Z dynamo/test_config 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_config_1.1_9dbd401c49538040_.log 2025-03-17T18:46:43.8486941Z 2025-03-17T18:46:43.8488424Z Running test_jiterator 1/1 ... [2025-03-17 18:46:43.848652] 2025-03-17T18:46:43.8488974Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:43.8491973Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_jiterator.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:43.848967] 2025-03-17T18:46:47.0451738Z 2025-03-17T18:46:47.0453078Z test_jiterator 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_jiterator_1.1_cb77d15636b8a39e_.log 2025-03-17T18:46:47.0454365Z Running 0 items in this shard: 2025-03-17T18:46:47.0454651Z 2025-03-17T18:46:47.0456342Z Running dynamo/test_sources 1/1 ... [2025-03-17 18:46:47.045448] 2025-03-17T18:46:47.0457238Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:47.0461760Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sources.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:47.045864] 2025-03-17T18:46:50.1311394Z 2025-03-17T18:46:50.1312822Z dynamo/test_sources 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sources_1.1_b9cfbc1c36502625_.log 2025-03-17T18:46:50.1313937Z 2025-03-17T18:46:50.1315989Z Running dynamo/test_optimizers 1/1 ... [2025-03-17 18:46:50.131391] 2025-03-17T18:46:50.1316727Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:50.1320936Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_optimizers.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
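Each "Executing [...]" entry above is a complete command line, so a single shard can be replayed outside the harness. The sketch below re-runs the dynamo/test_optimizers invocation with arguments copied verbatim from the log; the only assumption added here is that it is launched from the repository's test/ directory, which the relative file names imply.

    # Replay one logged shard command -- a sketch, assuming the current working
    # directory is the repository's test/ directory. Every argument is copied
    # from the "Executing [...]" entry for dynamo/test_optimizers above.
    import subprocess

    cmd = [
        "/opt/conda/envs/py_3.13/bin/python", "-bb",  # -bb: raise errors on bytes/str mixups
        "dynamo/test_optimizers.py",
        "-m", "serial",                               # marker filter used in this first pass
        "--shard-id=1", "--num-shards=1",
        "-v", "-vv", "-rfEX",
        "-p", "no:xdist",                             # disable the pytest-xdist plugin
        "--use-pytest", "-x",
        "--reruns=2",                                 # pytest-rerunfailures retry count
        "--import-slow-tests", "--import-disabled-tests",
    ]
    subprocess.run(cmd, check=True)                   # raises CalledProcessError on failure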
[2025-03-17 18:46:50.131789] 2025-03-17T18:46:53.2315483Z 2025-03-17T18:46:53.2316493Z dynamo/test_optimizers 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_optimizers_1.1_3a168843ab7b6642_.log 2025-03-17T18:46:53.2317379Z 2025-03-17T18:46:53.2318052Z Running dynamo/test_metrics_context 1/1 ... [2025-03-17 18:46:53.231639] 2025-03-17T18:46:53.2318540Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:53.2322099Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_metrics_context.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:53.231968] 2025-03-17T18:46:56.3297258Z 2025-03-17T18:46:56.3298310Z dynamo/test_metrics_context 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_metrics_context_1.1_bf6b4aa32404aa0b_.log 2025-03-17T18:46:56.3299089Z 2025-03-17T18:46:56.3300310Z Running xpu/test_conv 1/1 ... [2025-03-17 18:46:56.329877] 2025-03-17T18:46:56.3300920Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:56.3305625Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'xpu/test_conv.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:56.330227] 2025-03-17T18:46:59.7714362Z 2025-03-17T18:46:59.7715526Z xpu/test_conv 1/1 was successful, full logs can be found in artifacts with path test/test-reports/xpu.test_conv_1.1_b077d59cfa130752_.log 2025-03-17T18:46:59.7716312Z Running 0 items in this shard: 2025-03-17T18:46:59.7716514Z 2025-03-17T18:46:59.7717969Z Running dynamo/test_python_dispatcher 1/1 ... [2025-03-17 18:46:59.771606] 2025-03-17T18:46:59.7718699Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:46:59.7721715Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_python_dispatcher.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:46:59.771915] 2025-03-17T18:47:02.8776077Z 2025-03-17T18:47:02.8777443Z dynamo/test_python_dispatcher 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_python_dispatcher_1.1_ffe97d0e60fa0bc8_.log 2025-03-17T18:47:02.8778301Z 2025-03-17T18:47:02.8779228Z Running test_hub 1/1 ... [2025-03-17 18:47:02.877753] 2025-03-17T18:47:02.8779898Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:02.8782728Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_hub.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:02.878068] 2025-03-17T18:47:05.9660877Z 2025-03-17T18:47:05.9662098Z test_hub 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_hub_1.1_2bb48e25dcbe5c38_.log 2025-03-17T18:47:05.9662677Z 2025-03-17T18:47:05.9664737Z Running dynamo/test_flat_apply 1/1 ... 
[2025-03-17 18:47:05.966268] 2025-03-17T18:47:05.9665333Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:05.9668571Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_flat_apply.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:05.966588] 2025-03-17T18:47:09.0513895Z 2025-03-17T18:47:09.0515124Z dynamo/test_flat_apply 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_flat_apply_1.1_891f51f0de7e51d7_.log 2025-03-17T18:47:09.0515855Z 2025-03-17T18:47:09.0517507Z Running xpu/test_gemm 1/1 ... [2025-03-17 18:47:09.051564] 2025-03-17T18:47:09.0518213Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:09.0521454Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'xpu/test_gemm.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:09.051895] 2025-03-17T18:47:12.1651765Z 2025-03-17T18:47:12.1652846Z xpu/test_gemm 1/1 was successful, full logs can be found in artifacts with path test/test-reports/xpu.test_gemm_1.1_96cd9b90a122d9e0_.log 2025-03-17T18:47:12.1654143Z Running 0 items in this shard: 2025-03-17T18:47:12.1654580Z 2025-03-17T18:47:12.1655228Z Running dynamo/test_verify_correctness 1/1 ... [2025-03-17 18:47:12.165350] 2025-03-17T18:47:12.1655800Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:12.1658817Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_verify_correctness.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:12.165667] 2025-03-17T18:47:15.2660287Z 2025-03-17T18:47:15.2661617Z dynamo/test_verify_correctness 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_verify_correctness_1.1_8b5729137063c427_.log 2025-03-17T18:47:15.2662803Z 2025-03-17T18:47:15.2663582Z Running test_cuda_expandable_segments 1/1 ... [2025-03-17 18:47:15.266172] 2025-03-17T18:47:15.2664293Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:15.2667393Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cuda_expandable_segments.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:15.266466] 2025-03-17T18:47:19.7055519Z 2025-03-17T18:47:19.7056804Z test_cuda_expandable_segments 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cuda_expandable_segments_1.1_9c51a554ba2820a5_.log 2025-03-17T18:47:19.7057587Z 2025-03-17T18:47:19.7059234Z Running dynamo/test_debug_utils 1/1 ... [2025-03-17 18:47:19.705733] 2025-03-17T18:47:19.7059932Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:19.7063188Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_debug_utils.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:47:19.706054] 2025-03-17T18:47:23.3251814Z 2025-03-17T18:47:23.3253118Z dynamo/test_debug_utils 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_debug_utils_1.1_8c980ec79db20671_.log 2025-03-17T18:47:23.3253871Z 2025-03-17T18:47:23.3255343Z Running dynamo/test_structured_trace 1/1 ... [2025-03-17 18:47:23.325339] 2025-03-17T18:47:23.3256063Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:23.3259378Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_structured_trace.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:23.325664] 2025-03-17T18:47:26.9397902Z 2025-03-17T18:47:26.9398957Z dynamo/test_structured_trace 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_structured_trace_1.1_75cf2d42e2d17a8a_.log 2025-03-17T18:47:26.9399744Z 2025-03-17T18:47:26.9402727Z Running test_matmul_cuda 1/1 ... [2025-03-17 18:47:26.939966] 2025-03-17T18:47:26.9403259Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:26.9405191Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_matmul_cuda.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:26.940285] 2025-03-17T18:47:30.1574303Z 2025-03-17T18:47:30.1575501Z test_matmul_cuda 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_matmul_cuda_1.1_f289c0b4ff44df5a_.log 2025-03-17T18:47:30.1576275Z Running 0 items in this shard: 2025-03-17T18:47:30.1576570Z 2025-03-17T18:47:30.1577986Z Running dynamo/test_aot_autograd 1/1 ... [2025-03-17 18:47:30.157619] 2025-03-17T18:47:30.1578657Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:30.1582129Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_aot_autograd.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:30.157930] 2025-03-17T18:47:33.2559386Z 2025-03-17T18:47:33.2560628Z dynamo/test_aot_autograd 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_aot_autograd_1.1_0c85651929c4f40a_.log 2025-03-17T18:47:33.2561375Z 2025-03-17T18:47:33.2563113Z Running dynamo/test_higher_order_ops 1/1 ... [2025-03-17 18:47:33.256111] 2025-03-17T18:47:33.2563763Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:33.2566655Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_higher_order_ops.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:33.256419] 2025-03-17T18:47:37.6392156Z 2025-03-17T18:47:37.6393480Z dynamo/test_higher_order_ops 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_higher_order_ops_1.1_f0d94ee1b03b1a8f_.log 2025-03-17T18:47:37.6394275Z 2025-03-17T18:47:37.6395957Z Running dynamo/test_aot_autograd_cache 1/1 ... 
[2025-03-17 18:47:37.639391] 2025-03-17T18:47:37.6396609Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:37.6399690Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_aot_autograd_cache.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:37.639715] 2025-03-17T18:47:41.2582803Z 2025-03-17T18:47:41.2583892Z dynamo/test_aot_autograd_cache 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_aot_autograd_cache_1.1_33b64123222a7b5c_.log 2025-03-17T18:47:41.2584667Z 2025-03-17T18:47:41.2586313Z Running dynamo/test_exc 1/1 ... [2025-03-17 18:47:41.258453] 2025-03-17T18:47:41.2586829Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:41.2589890Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_exc.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:41.258767] 2025-03-17T18:47:44.3591408Z 2025-03-17T18:47:44.3592538Z dynamo/test_exc 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_exc_1.1_18493546b479bbe0_.log 2025-03-17T18:47:44.3593192Z 2025-03-17T18:47:44.3594642Z Running test_cuda_multigpu 1/1 ... [2025-03-17 18:47:44.359309] 2025-03-17T18:47:44.3595100Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:44.3598978Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cuda_multigpu.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:44.359616] 2025-03-17T18:47:47.5783043Z 2025-03-17T18:47:47.5783999Z test_cuda_multigpu 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cuda_multigpu_1.1_558c6263cf1e8ea1_.log 2025-03-17T18:47:47.5784801Z Running 0 items in this shard: 2025-03-17T18:47:47.5785023Z 2025-03-17T18:47:47.5786812Z Running dynamo/test_ctx_manager 1/1 ... [2025-03-17 18:47:47.578511] 2025-03-17T18:47:47.5787376Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:47.5790712Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_ctx_manager.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:47.578846] 2025-03-17T18:47:51.2090413Z 2025-03-17T18:47:51.2091409Z dynamo/test_ctx_manager 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_ctx_manager_1.1_a8596f376e0da0ce_.log 2025-03-17T18:47:51.2092342Z 2025-03-17T18:47:51.2094169Z Running dynamo/test_minifier 1/1 ... [2025-03-17 18:47:51.209225] 2025-03-17T18:47:51.2094631Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:51.2097778Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_minifier.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:47:51.209556] 2025-03-17T18:47:54.2946386Z 2025-03-17T18:47:54.2947484Z dynamo/test_minifier 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_minifier_1.1_343af30e8b933cef_.log 2025-03-17T18:47:54.2948433Z 2025-03-17T18:47:54.2950182Z Running dynamo/test_reorder_logs 1/1 ... [2025-03-17 18:47:54.294848] 2025-03-17T18:47:54.2950656Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:54.2954329Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_reorder_logs.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:54.295191] 2025-03-17T18:47:57.3897719Z 2025-03-17T18:47:57.3899047Z dynamo/test_reorder_logs 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_reorder_logs_1.1_81783ea7b5f9265f_.log 2025-03-17T18:47:57.3899785Z 2025-03-17T18:47:57.3901147Z Running test_linalg 4/4 ... [2025-03-17 18:47:57.389946] 2025-03-17T18:47:57.3901545Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:47:57.3905231Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_linalg.py', '-m', 'serial', '--shard-id=4', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:47:57.390263] 2025-03-17T18:48:01.2093638Z 2025-03-17T18:48:01.2094790Z test_linalg 4/4 was successful, full logs can be found in artifacts with path test/test-reports/test_linalg_4.4_f38c51a687fd4567_.log 2025-03-17T18:48:01.2095555Z Running 0 items in this shard: 2025-03-17T18:48:01.2095758Z 2025-03-17T18:48:01.2098034Z Running dynamo/test_python_autograd 1/1 ... [2025-03-17 18:48:01.209589] 2025-03-17T18:48:01.2098592Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:01.2102504Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_python_autograd.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:01.209961] 2025-03-17T18:48:04.3194499Z 2025-03-17T18:48:04.3195511Z dynamo/test_python_autograd 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_python_autograd_1.1_e6153da66cc6ad4f_.log 2025-03-17T18:48:04.3196285Z 2025-03-17T18:48:04.3275835Z Running dynamo/test_dynamic_shapes 1/1 ... [2025-03-17 18:48:04.327306] 2025-03-17T18:48:04.3276652Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:04.3281388Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_dynamic_shapes.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:04.327774] 2025-03-17T18:48:04.3283465Z Running dynamo/test_interop 1/1 ... [2025-03-17 18:48:04.327767] 2025-03-17T18:48:04.3284122Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:04.3284888Z Running test_appending_byte_serializer 1/1 ... 
[2025-03-17 18:48:04.327809] 2025-03-17T18:48:04.3285802Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:04.3287226Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_interop.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:04.328193] 2025-03-17T18:48:04.3289699Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_appending_byte_serializer.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:04.328270] 2025-03-17T18:48:07.6937252Z 2025-03-17T18:48:07.6938685Z test_appending_byte_serializer 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_appending_byte_serializer_1.1_5f9268ab8a0f92e4_.log 2025-03-17T18:48:07.6939561Z 2025-03-17T18:48:07.6943305Z 2025-03-17T18:48:07.6944614Z dynamo/test_interop 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_interop_1.1_cc5f5f7f74c8a699_.log 2025-03-17T18:48:07.6945604Z 2025-03-17T18:48:09.5776362Z Uploading artifacts took 1.88 seconds 2025-03-17T18:48:09.5779256Z 2025-03-17T18:48:09.5780534Z dynamo/test_dynamic_shapes 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_dynamic_shapes_1.1_02f0defef9ed58ab_.log 2025-03-17T18:48:09.5781937Z 2025-03-17T18:48:11.5256064Z Running dynamo/test_sdpa 1/1 ... [2025-03-17 18:48:11.525206] 2025-03-17T18:48:11.5256902Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:11.5258992Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sdpa.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:11.525570] 2025-03-17T18:48:11.5614107Z Running dynamo/test_frame_init 1/1 ... [2025-03-17 18:48:11.561033] 2025-03-17T18:48:11.5614958Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:11.5617071Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_frame_init.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:11.561356] 2025-03-17T18:48:13.1004985Z Running dynamo/test_sys 1/1 ... [2025-03-17 18:48:13.100065] 2025-03-17T18:48:13.1005731Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:13.1008131Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sys.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:48:13.100440] 2025-03-17T18:48:14.9481347Z 2025-03-17T18:48:14.9483006Z dynamo/test_frame_init 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_frame_init_1.1_fcab03d7021b6bc9_.log 2025-03-17T18:48:14.9484370Z 2025-03-17T18:48:14.9494029Z 2025-03-17T18:48:16.4445118Z dynamo/test_sdpa 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sdpa_1.1_7a27abd0e2b827af_.log 2025-03-17T18:48:16.4446322Z 2025-03-17T18:48:16.4446330Z 2025-03-17T18:48:16.4447237Z dynamo/test_sys 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sys_1.1_4391828934f55a76_.log 2025-03-17T18:48:16.4448340Z 2025-03-17T18:48:18.4681065Z Running dynamo/test_trace_rules 1/1 ... [2025-03-17 18:48:18.467662] 2025-03-17T18:48:18.4681932Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:18.4684650Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_trace_rules.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:18.468001] 2025-03-17T18:48:18.4781086Z Running dynamo/test_config 1/1 ... [2025-03-17 18:48:18.477815] 2025-03-17T18:48:18.4781895Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:18.4785278Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_config.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:18.478163] 2025-03-17T18:48:19.9064433Z Running test_jiterator 1/1 ... [2025-03-17 18:48:19.906034] 2025-03-17T18:48:19.9065143Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:19.9067455Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_jiterator.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:19.906395] 2025-03-17T18:48:21.8561609Z 2025-03-17T18:48:21.8563029Z dynamo/test_trace_rules 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_trace_rules_1.1_abffc84f018341c1_.log 2025-03-17T18:48:21.8564234Z 2025-03-17T18:48:21.8930831Z 2025-03-17T18:48:21.8932178Z dynamo/test_config 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_config_1.1_fada092543466d71_.log 2025-03-17T18:48:21.8933265Z 2025-03-17T18:48:23.3856175Z 2025-03-17T18:48:23.3857740Z test_jiterator 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_jiterator_1.1_73e510f607dd847c_.log 2025-03-17T18:48:23.3858548Z Running 0 items in this shard: 2025-03-17T18:48:23.3858768Z 2025-03-17T18:48:25.4468367Z Running dynamo/test_sources 1/1 ... [2025-03-17 18:48:25.446453] 2025-03-17T18:48:25.4468928Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:25.4471210Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_sources.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:25.446842] 2025-03-17T18:48:25.4556785Z Running dynamo/test_optimizers 1/1 ... 
[2025-03-17 18:48:25.455455] 2025-03-17T18:48:25.4557603Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:25.4560549Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_optimizers.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:25.455787] 2025-03-17T18:48:26.9287323Z Running dynamo/test_metrics_context 1/1 ... [2025-03-17 18:48:26.928275] 2025-03-17T18:48:26.9288068Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:26.9289412Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_metrics_context.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:26.928630] 2025-03-17T18:48:28.7878049Z 2025-03-17T18:48:28.7879064Z dynamo/test_optimizers 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_optimizers_1.1_140c1d8176c3643a_.log 2025-03-17T18:48:28.7879807Z 2025-03-17T18:48:28.7956744Z 2025-03-17T18:48:28.7957958Z dynamo/test_sources 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_sources_1.1_42d698941c70731f_.log 2025-03-17T18:48:28.7958662Z 2025-03-17T18:48:30.2816195Z 2025-03-17T18:48:30.2817855Z dynamo/test_metrics_context 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_metrics_context_1.1_979b9d8b58e382dd_.log 2025-03-17T18:48:30.2819213Z 2025-03-17T18:48:32.3321758Z Running xpu/test_conv 1/1 ... [2025-03-17 18:48:32.331804] 2025-03-17T18:48:32.3322510Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:32.3324941Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'xpu/test_conv.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:32.332176] 2025-03-17T18:48:32.3615089Z Running dynamo/test_python_dispatcher 1/1 ... [2025-03-17 18:48:32.361136] 2025-03-17T18:48:32.3615613Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:32.3617939Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_python_dispatcher.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:32.361512] 2025-03-17T18:48:33.7749583Z Running test_hub 1/1 ... [2025-03-17 18:48:33.774529] 2025-03-17T18:48:33.7750353Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:33.7752528Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_hub.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:48:33.774891] 2025-03-17T18:48:35.7300850Z 2025-03-17T18:48:35.7302133Z dynamo/test_python_dispatcher 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_python_dispatcher_1.1_52e8b0679f510c03_.log 2025-03-17T18:48:35.7303433Z 2025-03-17T18:48:36.0423204Z 2025-03-17T18:48:36.0424494Z xpu/test_conv 1/1 was successful, full logs can be found in artifacts with path test/test-reports/xpu.test_conv_1.1_12aa43f3a0447b1a_.log 2025-03-17T18:48:36.0425694Z Running 0 items in this shard: 2025-03-17T18:48:36.0426009Z 2025-03-17T18:48:37.1356753Z 2025-03-17T18:48:37.1358048Z test_hub 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_hub_1.1_749272caad8b55de_.log 2025-03-17T18:48:37.1359192Z 2025-03-17T18:48:39.3026766Z Running dynamo/test_flat_apply 1/1 ... [2025-03-17 18:48:39.302237] 2025-03-17T18:48:39.3027707Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:39.3030437Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_flat_apply.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:39.302642] 2025-03-17T18:48:39.6787174Z Running xpu/test_gemm 1/1 ... [2025-03-17 18:48:39.678351] 2025-03-17T18:48:39.6787981Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:39.6790608Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'xpu/test_gemm.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:39.678721] 2025-03-17T18:48:40.7006144Z Running dynamo/test_verify_correctness 1/1 ... [2025-03-17 18:48:40.700201] 2025-03-17T18:48:40.7006964Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:40.7009304Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_verify_correctness.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:40.700561] 2025-03-17T18:48:42.6799560Z 2025-03-17T18:48:42.6801018Z dynamo/test_flat_apply 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_flat_apply_1.1_9601024a04ce8f52_.log 2025-03-17T18:48:42.6801779Z 2025-03-17T18:48:43.0121399Z 2025-03-17T18:48:43.0122940Z xpu/test_gemm 1/1 was successful, full logs can be found in artifacts with path test/test-reports/xpu.test_gemm_1.1_0ff2d3eaaf944bca_.log 2025-03-17T18:48:43.0124059Z Running 0 items in this shard: 2025-03-17T18:48:43.0124402Z 2025-03-17T18:48:44.0961260Z 2025-03-17T18:48:44.0962423Z dynamo/test_verify_correctness 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_verify_correctness_1.1_3965078b9cf458c9_.log 2025-03-17T18:48:44.0963234Z 2025-03-17T18:48:46.3184485Z Running test_cuda_expandable_segments 1/1 ... [2025-03-17 18:48:46.318032] 2025-03-17T18:48:46.3186385Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:46.3188816Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cuda_expandable_segments.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:48:46.318421] 2025-03-17T18:48:46.6113518Z Running dynamo/test_debug_utils 1/1 ... [2025-03-17 18:48:46.610955] 2025-03-17T18:48:46.6114387Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:46.6116505Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_debug_utils.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:46.611302] 2025-03-17T18:48:47.5863159Z Running dynamo/test_structured_trace 1/1 ... [2025-03-17 18:48:47.585925] 2025-03-17T18:48:47.5863749Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:47.5866354Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_structured_trace.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:47.586319] 2025-03-17T18:48:50.5519764Z 2025-03-17T18:48:50.5521122Z dynamo/test_debug_utils 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_debug_utils_1.1_39f38591d739c0a6_.log 2025-03-17T18:48:50.5522344Z 2025-03-17T18:48:51.0705403Z 2025-03-17T18:48:51.0706649Z test_cuda_expandable_segments 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cuda_expandable_segments_1.1_7cb4f6e3c71443a5_.log 2025-03-17T18:48:51.0707431Z 2025-03-17T18:48:51.5472513Z 2025-03-17T18:48:51.5473949Z dynamo/test_structured_trace 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_structured_trace_1.1_34635567f924920c_.log 2025-03-17T18:48:51.5474755Z 2025-03-17T18:48:54.1688349Z Running test_matmul_cuda 1/1 ... [2025-03-17 18:48:54.168426] 2025-03-17T18:48:54.1689160Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:54.1691198Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_matmul_cuda.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:54.168776] 2025-03-17T18:48:54.7051465Z Running dynamo/test_aot_autograd 1/1 ... [2025-03-17 18:48:54.704770] 2025-03-17T18:48:54.7052152Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:54.7054274Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_aot_autograd.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:48:54.705129] 2025-03-17T18:48:55.0431611Z Running dynamo/test_higher_order_ops 1/1 ... [2025-03-17 18:48:55.042773] 2025-03-17T18:48:55.0432299Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:48:55.0433822Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_higher_order_ops.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:48:55.043126] 2025-03-17T18:48:57.6437602Z 2025-03-17T18:48:57.6438891Z test_matmul_cuda 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_matmul_cuda_1.1_ce4df283cb806ddf_.log 2025-03-17T18:48:57.6439698Z Running 0 items in this shard: 2025-03-17T18:48:57.6439912Z 2025-03-17T18:48:58.0546957Z 2025-03-17T18:48:58.0548494Z dynamo/test_aot_autograd 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_aot_autograd_1.1_f60138f91279311e_.log 2025-03-17T18:48:58.0550169Z 2025-03-17T18:48:59.7411316Z 2025-03-17T18:48:59.7412338Z dynamo/test_higher_order_ops 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_higher_order_ops_1.1_709c7a07840e0096_.log 2025-03-17T18:48:59.7413110Z 2025-03-17T18:49:01.1991536Z Running dynamo/test_aot_autograd_cache 1/1 ... [2025-03-17 18:49:01.198696] 2025-03-17T18:49:01.1992640Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:01.1994910Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_aot_autograd_cache.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:01.199070] 2025-03-17T18:49:01.6643903Z Running dynamo/test_exc 1/1 ... [2025-03-17 18:49:01.664002] 2025-03-17T18:49:01.6644377Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:01.6647159Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_exc.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:01.664439] 2025-03-17T18:49:03.2342192Z Running test_cuda_multigpu 1/1 ... [2025-03-17 18:49:03.233817] 2025-03-17T18:49:03.2342961Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:03.2344988Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_cuda_multigpu.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:03.234171] 2025-03-17T18:49:05.1041938Z 2025-03-17T18:49:05.1043163Z dynamo/test_exc 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_exc_1.1_14ed4e83c8691eaa_.log 2025-03-17T18:49:05.1044171Z 2025-03-17T18:49:05.1779623Z 2025-03-17T18:49:05.1781086Z dynamo/test_aot_autograd_cache 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_aot_autograd_cache_1.1_0acb90ccdb589c5c_.log 2025-03-17T18:49:05.1781895Z 2025-03-17T18:49:06.7662329Z 2025-03-17T18:49:06.7664060Z test_cuda_multigpu 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_cuda_multigpu_1.1_248bddab2d96d73a_.log 2025-03-17T18:49:06.7665458Z Running 0 items in this shard: 2025-03-17T18:49:06.7665827Z 2025-03-17T18:49:08.6451311Z Running dynamo/test_ctx_manager 1/1 ... [2025-03-17 18:49:08.644728] 2025-03-17T18:49:08.6452237Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:08.6454931Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_ctx_manager.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2025-03-17 18:49:08.645079] 2025-03-17T18:49:08.7318569Z Running dynamo/test_minifier 1/1 ... [2025-03-17 18:49:08.731439] 2025-03-17T18:49:08.7319415Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:08.7321433Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_minifier.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:08.731788] 2025-03-17T18:49:10.2377750Z Running dynamo/test_reorder_logs 1/1 ... [2025-03-17 18:49:10.237348] 2025-03-17T18:49:10.2378628Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:10.2380459Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_reorder_logs.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:10.237657] 2025-03-17T18:49:12.0834752Z 2025-03-17T18:49:12.0836560Z dynamo/test_minifier 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_minifier_1.1_800608f6df1664b7_.log 2025-03-17T18:49:12.0838182Z 2025-03-17T18:49:12.6271047Z 2025-03-17T18:49:12.6272369Z dynamo/test_ctx_manager 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_ctx_manager_1.1_dd9b05d04cbb1498_.log 2025-03-17T18:49:12.6273098Z 2025-03-17T18:49:13.6441162Z 2025-03-17T18:49:13.6442660Z dynamo/test_reorder_logs 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_reorder_logs_1.1_ba71478cb07b9596_.log 2025-03-17T18:49:13.6444128Z 2025-03-17T18:49:15.7029065Z Running test_linalg 4/4 ... [2025-03-17 18:49:15.702503] 2025-03-17T18:49:15.7029748Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:15.7031517Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'test_linalg.py', '-m', 'not serial', '--shard-id=4', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2025-03-17 18:49:15.702857] 2025-03-17T18:49:16.2217596Z Running dynamo/test_python_autograd 1/1 ... [2025-03-17 18:49:16.221305] 2025-03-17T18:49:16.2218541Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:49:16.2222702Z Executing ['/opt/conda/envs/py_3.13/bin/python', '-bb', 'dynamo/test_python_autograd.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
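From 18:48:04 onward the same test files run a second time, now with the marker expression "not serial" instead of "serial", and several files are launched concurrently, which is why the "Running ..." and "... was successful" lines interleave. The sketch below is a hypothetical illustration of such a serial / not-serial marker split at the pytest level; only the marker name matches the -m expressions in the log, while the test functions and the conftest registration are invented for illustration.

    # Hypothetical illustration of a "serial" marker split like the one driving
    # the two passes above; only the marker name is taken from the log.
    import pytest

    # In a conftest.py one would register the marker so pytest does not warn:
    # def pytest_configure(config):
    #     config.addinivalue_line("markers", "serial: must not run concurrently")

    @pytest.mark.serial
    def test_touches_global_state():
        # collected by  -m serial        and skipped by  -m "not serial"
        assert True

    def test_independent():
        # collected by  -m "not serial"  and skipped by  -m serial
        assert True

Running a file once with -m serial and once with -m "not serial" covers every test exactly once, which matches the two passes recorded in this log.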
[2025-03-17 18:49:16.221919] 2025-03-17T18:49:19.4885079Z 2025-03-17T18:49:19.4886340Z dynamo/test_python_autograd 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_python_autograd_1.1_1ca0d732324be9ed_.log 2025-03-17T18:49:19.4887117Z 2025-03-17T18:57:56.9900990Z 2025-03-17T18:57:56.9901864Z test_linalg 4/4 was successful, full logs can be found in artifacts with path test/test-reports/test_linalg_4.4_3fdcb04aeda3374a_.log 2025-03-17T18:57:57.0090175Z Running 327 items in this shard: test/test_linalg.py::TestLinalgCPU::test_1_sized_with_0_strided_cpu_float64, test/test_linalg.py::TestLinalgCPU::test__dyn_quant_matmul_4bit_m_1_k_128_n_11008_cpu, test/test_linalg.py::TestLinalgCPU::test__dyn_quant_pack_4bit_weight_k_256_n_32_cpu, test/test_linalg.py::TestLinalgCPU::test__int4_mm_m_32_k_32_n_48_cpu, test/test_linalg.py::TestLinalgCPU::test__int4_mm_m_32_k_64_n_48_cpu, test/test_linalg.py::TestLinalgCPU::test__int4_mm_m_32_k_64_n_64_cpu, test/test_linalg.py::TestLinalgCPU::test__int4_mm_m_64_k_32_n_64_cpu, test/test_linalg.py::TestLinalgCPU::test__int4_mm_m_64_k_64_n_64_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_32_k_32_n_48_compile_True_slice_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_32_k_64_n_64_compile_False_slice_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_32_k_64_n_64_compile_True_slice_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_32_n_48_compile_False_slice_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_32_n_48_compile_True_slice_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_32_n_64_compile_True_slice_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_64_n_48_compile_True_slice_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_64_n_64_compile_False_slice_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int8_mm_m_64_k_64_n_64_compile_True_slice_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_0_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_0_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_0_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_0_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_0_cpu, 
test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_16_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_16_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_32_use_transpose_a_True_use_transpose_b_True_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_0_k_32_n_32_use_transpose_a_True_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_0_n_16_use_transpose_a_False_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_0_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_0_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_0_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_0_n_32_use_transpose_a_True_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_16_use_transpose_a_False_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_16_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_16_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_16_use_transpose_a_False_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_16_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_1_cpu, 
test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_17_k_32_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_0_n_16_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_0_n_16_use_transpose_a_True_use_transpose_b_True_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_0_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_0_n_32_use_transpose_a_True_use_transpose_b_True_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_16_n_16_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_16_n_16_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_16_use_transpose_a_False_use_transpose_b_True_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_16_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_32_use_transpose_a_False_use_transpose_b_False_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_32_use_transpose_a_False_use_transpose_b_True_non_contig_type_1_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_0_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_cpu_m_8_k_32_n_32_use_transpose_a_True_use_transpose_b_False_non_contig_type_2_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_k_16_n_16_use_transpose_a_False_use_transpose_b_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_k_32_n_16_use_transpose_a_False_use_transpose_b_False_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_k_32_n_16_use_transpose_a_True_use_transpose_b_True_cpu, test/test_linalg.py::TestLinalgCPU::test__int_mm_k_32_n_32_use_transpose_a_False_use_transpose_b_True_cpu, test/test_linalg.py::TestLinalgCPU::test_addbmm_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addbmm_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_addmm_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_0_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_0_5_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_0_5_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_0_5_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_0_beta_1_0_cpu_bfloat16, 
test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_2_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_2_beta_0_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_2_beta_0_5_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_0_2_beta_0_5_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_1_0_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_1_0_beta_0_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_False_alpha_1_0_beta_1_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_0_0_beta_0_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_0_0_beta_0_5_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_0_0_beta_0_5_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_0_0_beta_1_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_0_2_beta_1_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_1_0_beta_0_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_False_transpose_b_True_alpha_1_0_beta_1_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_0_beta_0_5_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_0_beta_0_5_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_0_beta_0_5_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_2_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_2_beta_0_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_0_2_beta_1_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_1_0_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_1_0_beta_0_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_1_0_beta_0_5_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_1_0_beta_1_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_False_alpha_1_0_beta_1_0_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_0_beta_0_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_0_beta_0_5_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_2_beta_0_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_2_beta_0_5_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_2_beta_1_0_cpu_bfloat16, 
test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_0_2_beta_1_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_1_0_beta_0_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_1_0_beta_0_0_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_addmm_mv_transpose_a_True_transpose_b_True_alpha_1_0_beta_1_0_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_relu_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_addmm_relu_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addmv_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_addr_float_and_complex_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_addr_float_and_complex_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_addr_float_and_complex_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_addr_integral_cpu_int64, test/test_linalg.py::TestLinalgCPU::test_addr_integral_cpu_int8, test/test_linalg.py::TestLinalgCPU::test_baddbmm_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_baddbmm_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_baddbmm_input_dtypes_compatibility_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_baddbmm_input_dtypes_compatibility_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_baddbmm_input_dtypes_compatibility_cpu_int32, test/test_linalg.py::TestLinalgCPU::test_blas_alpha_beta_empty_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_blas_nan_out_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_bmm_cpu_bfloat16, test/test_linalg.py::TestLinalgCPU::test_bmm_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_broadcast_fused_matmul_cpu, test/test_linalg.py::TestLinalgCPU::test_cholesky_errors_and_warnings_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_cholesky_ex_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cholesky_ex_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_cholesky_ex_non_pd_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_cholesky_inverse_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cholesky_inverse_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_cholesky_inverse_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_broadcasting_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_broadcasting_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_broadcasting_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_many_batches_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_batched_many_batches_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_cholesky_solve_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_compile_dyn_quant_matmul_4bit_m_1_k_64_n_4096_cpu, test/test_linalg.py::TestLinalgCPU::test_compile_int4_mm_m_32_k_32_n_48_cpu, test/test_linalg.py::TestLinalgCPU::test_compile_int4_mm_m_64_k_64_n_64_cpu, test/test_linalg.py::TestLinalgCPU::test_cond_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_cond_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_corner_cases_of_cublasltmatmul_cpu_complex64, 
test/test_linalg.py::TestLinalgCPU::test_corner_cases_of_cublasltmatmul_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_corner_cases_of_cublasltmatmul_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_corner_cases_of_cublasltmatmul_cpu_int32, test/test_linalg.py::TestLinalgCPU::test_corner_cases_of_cublasltmatmul_cpu_int8, test/test_linalg.py::TestLinalgCPU::test_cross_with_and_without_dim_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_cross_with_and_without_dim_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_det_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_det_logdet_slogdet_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_disable_tuning_tunableop_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_dot_invalid_args_cpu, test/test_linalg.py::TestLinalgCPU::test_dot_vs_numpy_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_eig_check_magma_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_eig_compare_backends_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_eig_compare_backends_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_eig_errors_and_warnings_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_eig_with_nan_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_eigh_errors_and_warnings_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_eigh_lower_uplo_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_eigh_lower_uplo_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_eigh_lwork_lapack_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_eigvals_compare_backends_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_eigvals_errors_and_warnings_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_eigvals_errors_and_warnings_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_eigvalsh_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_eigvalsh_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_eigvalsh_errors_and_warnings_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_einsum_corner_cases_cpu, test/test_linalg.py::TestLinalgCPU::test_einsum_random_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_einsum_sublist_format_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_fp16_mv_transposed_first_argument_arm_cpu_m_35_k_36_cpu, test/test_linalg.py::TestLinalgCPU::test_fp16_mv_transposed_first_argument_arm_cpu_m_35_k_40_cpu, test/test_linalg.py::TestLinalgCPU::test_fp16_mv_transposed_first_argument_arm_cpu_m_35_k_64_cpu, test/test_linalg.py::TestLinalgCPU::test_fp16_mv_transposed_first_argument_arm_cpu_m_36_k_64_cpu, test/test_linalg.py::TestLinalgCPU::test_fp16_mv_transposed_first_argument_arm_cpu_m_64_k_64_cpu, test/test_linalg.py::TestLinalgCPU::test_householder_product_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_householder_product_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_inv_errors_and_warnings_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_inv_ex_info_device_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_inv_ex_singular_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_inv_ex_singular_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_inverse_errors_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_inverse_errors_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_kron_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_kron_empty_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_kron_errors_and_warnings_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_kron_errors_and_warnings_cpu_float64, 
test/test_linalg.py::TestLinalgCPU::test_large_bmm_backward_cpu, test/test_linalg.py::TestLinalgCPU::test_ldl_factor_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_ldl_solve_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_linalg_lstsq_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_linalg_lstsq_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_linalg_lstsq_input_checks_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_linalg_lstsq_input_checks_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_linalg_lu_cpu_errors_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_linalg_lu_family_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_linalg_lu_solve_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_analytic_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_boundary_cases_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_perverse_nan_values_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_perverse_nan_values_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_perverse_nan_values_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_linalg_matrix_exp_utils_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_linalg_solve_triangular_broadcasting_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_linalg_solve_triangular_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_linalg_solve_triangular_large_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_linalg_solve_triangular_large_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_lobpcg_basic_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_lower_precision_accumulation_with_ref_path_cpu, test/test_linalg.py::TestLinalgCPU::test_lu_solve_batched_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_lu_solve_batched_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_lu_solve_batched_many_batches_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_lu_solve_batched_many_batches_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_lu_solve_batched_many_batches_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_lu_solve_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_lu_solve_large_matrices_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_lu_solve_large_matrices_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_matmul_out_kernel_errors_with_autograd_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_matmul_small_brute_force_2d_Nd_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_matmul_small_brute_force_2d_Nd_cpu_int64, test/test_linalg.py::TestLinalgCPU::test_matmul_small_brute_force_3d_Nd_cpu_int64, test/test_linalg.py::TestLinalgCPU::test_matrix_rank_atol_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_matrix_rank_atol_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_matrix_rank_basic_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_matrix_rank_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_matrix_rank_removed_error_cpu, test/test_linalg.py::TestLinalgCPU::test_mm_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_multi_dot_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_multi_dot_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_norm_complexhalf_cpu, test/test_linalg.py::TestLinalgCPU::test_norm_dtype_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_norm_dtype_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_norm_errors_cpu_float64, 
test/test_linalg.py::TestLinalgCPU::test_norm_matrix_degenerate_shapes_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_norm_vector_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_norm_vector_degenerate_shapes_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_nuclear_norm_exceptions_old_cpu, test/test_linalg.py::TestLinalgCPU::test_ormqr_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_ormqr_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_outer_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_outer_cpu_int16, test/test_linalg.py::TestLinalgCPU::test_outer_cpu_int32, test/test_linalg.py::TestLinalgCPU::test_outer_cpu_int8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_bfloat16_bool, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_bfloat16_float32, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_bool_float64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_bool_int64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_bool_uint8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_complex128_int16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_complex128_uint8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_complex64_complex64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_complex64_float32, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float16_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float16_bool, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float16_float64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float16_int16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_complex128, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_complex64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_float16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_float32, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_float64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_int16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float32_uint8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float64_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float64_int64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_float64_uint8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int16_bool, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int16_float32, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int16_float64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int16_int16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int32_int8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int64_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int64_bool, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int64_complex128, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int64_complex64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int8_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int8_complex64, 
test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int8_float32, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int8_float64, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_int8_uint8, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_uint8_bfloat16, test/test_linalg.py::TestLinalgCPU::test_outer_type_promotion_cpu_uint8_uint8, test/test_linalg.py::TestLinalgCPU::test_pinv_errors_and_warnings_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_pinv_errors_and_warnings_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_qr_batched_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_qr_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_qr_error_cases_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_qr_vs_numpy_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_scaled_gemm_tunableop_cpu_float8_e4m3fnuz, test/test_linalg.py::TestLinalgCPU::test_slogdet_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_slogdet_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_slogdet_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_slogdet_errors_and_warnings_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_solve_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_svd_lowrank_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_symeig_removed_error_cpu, test/test_linalg.py::TestLinalgCPU::test_tensorinv_errors_and_warnings_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_tensorsolve_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_tensorsolve_empty_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_triangular_solve_batched_broadcasting_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_triangular_solve_cpu_complex64, test/test_linalg.py::TestLinalgCPU::test_triangular_solve_out_errors_and_warnings_cpu_float64, test/test_linalg.py::TestLinalgCPU::test_vdot_vs_numpy_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_vector_norm_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_vector_norm_cpu_float16, test/test_linalg.py::TestLinalgCPU::test_vector_norm_cpu_float32, test/test_linalg.py::TestLinalgCPU::test_vector_norm_reduce_over_1D_vector_cpu_complex128, test/test_linalg.py::TestLinalgCPU::test_vector_norm_reduce_over_1D_vector_cpu_float32 2025-03-17T18:57:57.0206507Z 2025-03-17T18:57:57.7500929Z Running test batch 'tests to run' cost 4256.88 seconds 2025-03-17T18:57:58.5535659Z 2025-03-17T18:57:58.5536392Z real 71m2.456s 2025-03-17T18:57:58.5536727Z user 86m12.607s 2025-03-17T18:57:58.5537141Z sys 8m51.531s 2025-03-17T18:57:58.5537401Z + assert_git_not_dirty 2025-03-17T18:57:58.5537731Z + [[ linux-focal-py3.13-clang10 != *rocm* ]] 2025-03-17T18:57:58.5538144Z + [[ linux-focal-py3.13-clang10 != *xla* ]] 2025-03-17T18:57:58.5542263Z ++ git status --porcelain 2025-03-17T18:57:58.5543125Z ++ grep -v '?? 
third_party' 2025-03-17T18:58:34.5057504Z ++ true 2025-03-17T18:58:34.5058048Z + git_status= 2025-03-17T18:58:34.5060437Z + [[ -n '' ]] 2025-03-17T18:58:34.5060802Z + [[ 1 == 1 ]] 2025-03-17T18:58:34.5061099Z + test_aten 2025-03-17T18:58:34.5061546Z + echo 'Running ATen tests with pytorch lib' 2025-03-17T18:58:34.5062046Z Running ATen tests with pytorch lib 2025-03-17T18:58:34.5062579Z + [[ -n '' ]] 2025-03-17T18:58:34.5063065Z + echo 'Running test with the build folder' 2025-03-17T18:58:34.5063640Z Running test with the build folder 2025-03-17T18:58:34.5064035Z + TEST_BASE_DIR=build/bin 2025-03-17T18:58:34.5064541Z + ln -sf /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libc10.so build/bin 2025-03-17T18:58:34.5088472Z + ln -sf '/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libcaffe2*' build/bin 2025-03-17T18:58:34.5098166Z + ln -sf '/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libmkldnn*' build/bin 2025-03-17T18:58:34.5107243Z + ln -sf '/opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libnccl*' build/bin 2025-03-17T18:58:34.5118510Z + ln -sf /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libtorch.so /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libtorch_cpu.so /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libtorch_global_deps.so /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libtorch_python.so /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/lib/libtorchbind_test.so build/bin 2025-03-17T18:58:34.5125872Z + ls build/bin 2025-03-17T18:58:34.5163746Z BackoffTest cpu_generator_test 2025-03-17T18:58:34.5164545Z CMakeFiles cpu_profiling_allocator_test 2025-03-17T18:58:34.5165206Z CTestTestfile.cmake cpu_rng_test 2025-03-17T18:58:34.5165875Z CppSignature_test dispatch_key_set_test 2025-03-17T18:58:34.5166573Z Dict_test dlconvertor_test 2025-03-17T18:58:34.5167142Z Dimname_test example_allreduce 2025-03-17T18:58:34.5167785Z FileStoreTest extension_backend_test 2025-03-17T18:58:34.5168434Z HashStoreTest half_test 2025-03-17T18:58:34.5169036Z IListRef_test inline_container_test 2025-03-17T18:58:34.5169498Z KernelFunction_test ivalue_test 2025-03-17T18:58:34.5169863Z List_test kernel_function_legacy_test 2025-03-17T18:58:34.5170239Z Makefile kernel_function_test 2025-03-17T18:58:34.5170625Z MaybeOwned_test kernel_lambda_legacy_test 2025-03-17T18:58:34.5171163Z NamedTensor_test kernel_lambda_test 2025-03-17T18:58:34.5171587Z ProcessGroupGlooTest kernel_stackbased_test 2025-03-17T18:58:34.5172007Z StorageUtils_test lazy_tensor_test 2025-03-17T18:58:34.5172374Z TCPStoreTest legacy_vmap_test 2025-03-17T18:58:34.5172730Z aot_model_compiler_test libc10.so 2025-03-17T18:58:34.5173095Z apply_utils_test 'libcaffe2*' 2025-03-17T18:58:34.5173427Z atest 'libmkldnn*' 2025-03-17T18:58:34.5173746Z backend_fallback_test 'libnccl*' 2025-03-17T18:58:34.5174084Z basic libtorch.so 2025-03-17T18:58:34.5174398Z broadcast_test libtorch_cpu.so 2025-03-17T18:58:34.5174783Z c10_ArrayRef_test libtorch_global_deps.so 2025-03-17T18:58:34.5175178Z c10_Bitset_test libtorch_python.so 2025-03-17T18:58:34.5175617Z c10_CompileTimeFunctionPointer_test libtorchbind_test.so 2025-03-17T18:58:34.5176150Z c10_ConstexprCrc_test make_boxed_from_unboxed_functor_test 2025-03-17T18:58:34.5176634Z c10_DeadlockDetection_test math_kernel_test 2025-03-17T18:58:34.5177034Z c10_DeviceGuard_test memory_format_test 2025-03-17T18:58:34.5177431Z c10_Device_test memory_overlapping_test 2025-03-17T18:58:34.5177845Z 
c10_DispatchKeySet_test mobile_memory_cleanup 2025-03-17T18:58:34.5178236Z c10_Half_test native_test 2025-03-17T18:58:34.5178607Z c10_InlineDeviceGuard_test op_allowlist_test 2025-03-17T18:58:34.5179073Z c10_InlineStreamGuard_test op_registration_test 2025-03-17T18:58:34.5179495Z c10_LeftRight_test operator_name_test 2025-03-17T18:58:34.5179889Z c10_Metaprogramming_test operators_test 2025-03-17T18:58:34.5180438Z c10_NetworkFlow_test packedtensoraccessor_test 2025-03-17T18:58:34.5180857Z c10_Scalar_test parallel_benchmark 2025-03-17T18:58:34.5181230Z c10_SizesAndStrides_test pow_test 2025-03-17T18:58:34.5181587Z c10_StreamGuard_test protoc 2025-03-17T18:58:34.5181929Z c10_SymInt_test protoc-3.13.0.0 2025-03-17T18:58:34.5182298Z c10_Synchronized_test quantized_test 2025-03-17T18:58:34.5182677Z c10_ThreadLocal_test reduce_ops_test 2025-03-17T18:58:34.5183076Z c10_TypeIndex_test reportMemoryUsage_test 2025-03-17T18:58:34.5183478Z c10_TypeList_test scalar_tensor_test 2025-03-17T18:58:34.5183887Z c10_TypeTraits_test scalar_test 2025-03-17T18:58:34.5184262Z c10_accumulate_test static_runtime_bench 2025-03-17T18:58:34.5184655Z c10_bfloat16_test static_runtime_test 2025-03-17T18:58:34.5185043Z c10_bit_cast_test stride_properties_test 2025-03-17T18:58:34.5185455Z c10_complex_math_test tensor_iterator_test 2025-03-17T18:58:34.5185826Z c10_complex_test test_api 2025-03-17T18:58:34.5186153Z c10_cow_test test_cpp_rpc 2025-03-17T18:58:34.5186590Z c10_error_test test_dist_autograd 2025-03-17T18:58:34.5186994Z c10_exception_test test_edge_op_registration 2025-03-17T18:58:34.5187388Z c10_flags_test test_jit 2025-03-17T18:58:34.5187719Z c10_generic_math_test test_lazy 2025-03-17T18:58:34.5188104Z c10_intrusive_ptr_benchmark test_mobile_nnc 2025-03-17T18:58:34.5188514Z c10_intrusive_ptr_test test_parallel 2025-03-17T18:58:34.5188887Z c10_irange_test test_tensorexpr 2025-03-17T18:58:34.5189242Z c10_lazy_test thread_init_test 2025-03-17T18:58:34.5189599Z c10_logging_test torch_shm_manager 2025-03-17T18:58:34.5189985Z c10_optional_test tutorial_tensorexpr 2025-03-17T18:58:34.5190401Z c10_ordered_preserving_dict_test type_ptr_test 2025-03-17T18:58:34.5190797Z c10_registry_test type_test 2025-03-17T18:58:34.5191168Z c10_small_vector_test undefined_tensor_test 2025-03-17T18:58:34.5191576Z c10_ssize_test vec_test_all_types_AVX2 2025-03-17T18:58:34.5192067Z c10_string_util_test vec_test_all_types_AVX512 2025-03-17T18:58:34.5192561Z c10_string_view_test vec_test_all_types_DEFAULT 2025-03-17T18:58:34.5193090Z c10_tempfile_test verify_api_visibility 2025-03-17T18:58:34.5193512Z c10_typeid_test weakref_test 2025-03-17T18:58:34.5193859Z cmake_install.cmake wrapdim_test 2025-03-17T18:58:34.5194225Z cpu_allocator_test xla_tensor_test 2025-03-17T18:58:34.5194586Z + aten/tools/run_tests.sh build/bin 2025-03-17T18:58:34.5194918Z + set -e 2025-03-17T18:58:34.5195173Z ++ dirname aten/tools/run_tests.sh 2025-03-17T18:58:34.5195607Z + VALGRIND_SUP=/var/lib/jenkins/workspace/aten/tools/valgrind.sup 2025-03-17T18:58:34.5196061Z + export CPP_TESTS_DIR=build/bin 2025-03-17T18:58:34.5196384Z + CPP_TESTS_DIR=build/bin 2025-03-17T18:58:34.5196665Z + VALGRIND=ON 2025-03-17T18:58:34.5198511Z + python test/run_test.py --cpp --verbose -i cpp/basic cpp/atest cpp/scalar_test cpp/broadcast_test cpp/wrapdim_test cpp/apply_utils_test cpp/dlconvertor_test cpp/native_test cpp/scalar_tensor_test cpp/undefined_tensor_test cpp/extension_backend_test cpp/lazy_tensor_test cpp/tensor_iterator_test cpp/Dimname_test cpp/Dict_test cpp/NamedTensor_test 
cpp/cpu_generator_test cpp/legacy_vmap_test cpp/operators_test 2025-03-17T18:58:34.6198192Z /var/lib/jenkins/workspace/test/run_test.py:24: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html 2025-03-17T18:58:34.6199137Z import pkg_resources 2025-03-17T18:58:38.2715332Z Downloading https://ossci-metrics.s3.amazonaws.com/disabled-tests-condensed.json to /var/lib/jenkins/workspace/test/.pytorch-disabled-tests.json 2025-03-17T18:58:38.2886467Z Found test times from artifacts 2025-03-17T18:58:38.3578192Z Found test times from artifacts 2025-03-17T18:58:38.3602443Z Running all tests 2025-03-17T18:58:38.3607025Z Running parallel tests on 3 processes 2025-03-17T18:58:38.3608645Z Name: tests to run (est. time: 0.0min) 2025-03-17T18:58:38.3609222Z Serial tests (0): 2025-03-17T18:58:38.3609604Z Parallel tests (19): 2025-03-17T18:58:38.3610021Z cpp/Dict_test 1/1 2025-03-17T18:58:38.3610469Z cpp/Dimname_test 1/1 2025-03-17T18:58:38.3610924Z cpp/NamedTensor_test 1/1 2025-03-17T18:58:38.3611410Z cpp/apply_utils_test 1/1 2025-03-17T18:58:38.3611865Z cpp/atest 1/1 2025-03-17T18:58:38.3612228Z cpp/basic 1/1 2025-03-17T18:58:38.3612624Z cpp/broadcast_test 1/1 2025-03-17T18:58:38.3613226Z cpp/cpu_generator_test 1/1 2025-03-17T18:58:38.3613705Z cpp/dlconvertor_test 1/1 2025-03-17T18:58:38.3614190Z cpp/extension_backend_test 1/1 2025-03-17T18:58:38.3614710Z cpp/lazy_tensor_test 1/1 2025-03-17T18:58:38.3615193Z cpp/legacy_vmap_test 1/1 2025-03-17T18:58:38.3615674Z cpp/native_test 1/1 2025-03-17T18:58:38.3616086Z cpp/operators_test 1/1 2025-03-17T18:58:38.3616592Z cpp/scalar_tensor_test 1/1 2025-03-17T18:58:38.3617100Z cpp/scalar_test 1/1 2025-03-17T18:58:38.3617594Z cpp/tensor_iterator_test 1/1 2025-03-17T18:58:38.3618186Z cpp/undefined_tensor_test 1/1 2025-03-17T18:58:38.3618675Z cpp/wrapdim_test 1/1 2025-03-17T18:58:38.3619137Z Name: excluded (est. time: 0.0min) 2025-03-17T18:58:38.3619642Z Serial tests (0): 2025-03-17T18:58:38.3620073Z Parallel tests (0): 2025-03-17T18:58:38.3670134Z Running cpp/Dict_test 1/1 ... [2025-03-17 18:58:38.366729] 2025-03-17T18:58:38.3670907Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:38.3677607Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/Dict_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-e05e7b670386672f.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:38.367351] 2025-03-17T18:58:40.4369866Z 2025-03-17T18:58:40.4371506Z cpp/Dict_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.Dict_test_1.1_adfc084bd540d27f_.log 2025-03-17T18:58:40.4372624Z 2025-03-17T18:58:40.4372973Z Running cpp/Dimname_test 1/1 ... [2025-03-17 18:58:40.436700] 2025-03-17T18:58:40.4373693Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:40.4376087Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/Dimname_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-90a27e40f94f8cfa.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:40.437146] 2025-03-17T18:58:42.3551840Z 2025-03-17T18:58:42.3553183Z cpp/Dimname_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.Dimname_test_1.1_f14063a1943e5e01_.log 2025-03-17T18:58:42.3553846Z 2025-03-17T18:58:42.3554074Z Running cpp/NamedTensor_test 1/1 ... 
[2025-03-17 18:58:42.355052] 2025-03-17T18:58:42.3554570Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:42.3556509Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/NamedTensor_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-fc2c33c5a1f01529.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:42.355395] 2025-03-17T18:58:43.8721258Z 2025-03-17T18:58:43.8722475Z cpp/NamedTensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.NamedTensor_test_1.1_4caf204a8569b243_.log 2025-03-17T18:58:43.8723182Z 2025-03-17T18:58:43.8723410Z Running cpp/apply_utils_test 1/1 ... [2025-03-17 18:58:43.871963] 2025-03-17T18:58:43.8724059Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:43.8726548Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/apply_utils_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-88d25be6a3b5c539.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:43.872311] 2025-03-17T18:58:45.4393271Z 2025-03-17T18:58:45.4394601Z cpp/apply_utils_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.apply_utils_test_1.1_1bd8bb4becef262d_.log 2025-03-17T18:58:45.4395381Z 2025-03-17T18:58:45.4395567Z Running cpp/atest 1/1 ... [2025-03-17 18:58:45.439214] 2025-03-17T18:58:45.4396032Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:45.4398250Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/atest', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-2d14aacec60bd4d6.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:45.439586] 2025-03-17T18:58:47.0063506Z 2025-03-17T18:58:47.0064329Z cpp/atest 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.atest_1.1_440dc7a1d94d53a1_.log 2025-03-17T18:58:47.0064940Z 2025-03-17T18:58:47.0065803Z Running cpp/basic 1/1 ... [2025-03-17 18:58:47.006241] 2025-03-17T18:58:47.0066250Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:47.0068865Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/basic', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-9969723ef84e7d06.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:47.006638] 2025-03-17T18:58:48.5734318Z 2025-03-17T18:58:48.5735265Z cpp/basic 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.basic_1.1_623085591d4cd470_.log 2025-03-17T18:58:48.5735933Z 2025-03-17T18:58:48.5736156Z Running cpp/broadcast_test 1/1 ... [2025-03-17 18:58:48.573275] 2025-03-17T18:58:48.5736679Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:48.5738531Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/broadcast_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-91d38a8d7f26e4d2.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:48.573618] 2025-03-17T18:58:50.0905229Z 2025-03-17T18:58:50.0906151Z cpp/broadcast_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.broadcast_test_1.1_73d049865ff5cebe_.log 2025-03-17T18:58:50.0906910Z 2025-03-17T18:58:50.0908693Z Running cpp/cpu_generator_test 1/1 ... 
[2025-03-17 18:58:50.090376] 2025-03-17T18:58:50.0909156Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:50.0910958Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/cpu_generator_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-4211a01e309ce218.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:50.090719] 2025-03-17T18:58:51.6075323Z 2025-03-17T18:58:51.6076866Z cpp/cpu_generator_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.cpu_generator_test_1.1_5fd57f72ec6f8b38_.log 2025-03-17T18:58:51.6078023Z 2025-03-17T18:58:51.6078391Z Running cpp/dlconvertor_test 1/1 ... [2025-03-17 18:58:51.607412] 2025-03-17T18:58:51.6079103Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:51.6081315Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/dlconvertor_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-b1f11aae3993aff9.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:51.607782] 2025-03-17T18:58:53.1744504Z 2025-03-17T18:58:53.1745646Z cpp/dlconvertor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.dlconvertor_test_1.1_232afac92c651d52_.log 2025-03-17T18:58:53.1746410Z 2025-03-17T18:58:53.1746671Z Running cpp/extension_backend_test 1/1 ... [2025-03-17 18:58:53.174323] 2025-03-17T18:58:53.1747249Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:53.1749030Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/extension_backend_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-423739c2488d52e8.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:53.174670] 2025-03-17T18:58:54.7416987Z 2025-03-17T18:58:54.7418047Z cpp/extension_backend_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.extension_backend_test_1.1_eaaaf9f2e58ebc7f_.log 2025-03-17T18:58:54.7419022Z 2025-03-17T18:58:54.7419255Z Running cpp/lazy_tensor_test 1/1 ... [2025-03-17 18:58:54.741507] 2025-03-17T18:58:54.7419759Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:54.7421133Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/lazy_tensor_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-e989fabe23703622.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:54.741827] 2025-03-17T18:58:56.2586212Z 2025-03-17T18:58:56.2587496Z cpp/lazy_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.lazy_tensor_test_1.1_752bccdf12d03d29_.log 2025-03-17T18:58:56.2588230Z 2025-03-17T18:58:56.2588466Z Running cpp/legacy_vmap_test 1/1 ... [2025-03-17 18:58:56.258479] 2025-03-17T18:58:56.2588921Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:56.2590782Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/legacy_vmap_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-50dc84e934c4a63f.xml', '-x', '--reruns=2'] ... 
[2025-03-17 18:58:56.258811] 2025-03-17T18:58:57.8255201Z 2025-03-17T18:58:57.8256185Z cpp/legacy_vmap_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.legacy_vmap_test_1.1_28d5c62bed339262_.log 2025-03-17T18:58:57.8256914Z 2025-03-17T18:58:57.8257124Z Running cpp/native_test 1/1 ... [2025-03-17 18:58:57.825376] 2025-03-17T18:58:57.8257688Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:57.8258914Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/native_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-2f59ef7984b296b0.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:57.825676] 2025-03-17T18:58:59.3424002Z 2025-03-17T18:58:59.3424935Z cpp/native_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.native_test_1.1_a21da3d0639afa37_.log 2025-03-17T18:58:59.3425896Z 2025-03-17T18:58:59.3426334Z Running cpp/operators_test 1/1 ... [2025-03-17 18:58:59.342227] 2025-03-17T18:58:59.3427068Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:58:59.3428780Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/operators_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-767ba6440a31ad74.xml', '-x', '--reruns=2'] ... [2025-03-17 18:58:59.342567] 2025-03-17T18:59:00.9094954Z 2025-03-17T18:59:00.9096425Z cpp/operators_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.operators_test_1.1_b0503f72b39d318a_.log 2025-03-17T18:59:00.9097255Z 2025-03-17T18:59:00.9097513Z Running cpp/scalar_tensor_test 1/1 ... [2025-03-17 18:59:00.909345] 2025-03-17T18:59:00.9098189Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:00.9099843Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/scalar_tensor_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-841876b03e8a8cc4.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:00.909676] 2025-03-17T18:59:02.4264311Z 2025-03-17T18:59:02.4265449Z cpp/scalar_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.scalar_tensor_test_1.1_0027000a2db0e1ba_.log 2025-03-17T18:59:02.4266495Z 2025-03-17T18:59:02.4266716Z Running cpp/scalar_test 1/1 ... [2025-03-17 18:59:02.426118] 2025-03-17T18:59:02.4267148Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:02.4268527Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/scalar_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-a173831d4e0e46da.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:02.426456] 2025-03-17T18:59:03.9935143Z 2025-03-17T18:59:03.9936279Z cpp/scalar_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.scalar_test_1.1_e3101926528e47f2_.log 2025-03-17T18:59:03.9937196Z 2025-03-17T18:59:03.9937504Z Running cpp/tensor_iterator_test 1/1 ... [2025-03-17 18:59:03.993347] 2025-03-17T18:59:03.9938022Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:03.9939700Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/tensor_iterator_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-83c169a7ef82a84e.xml', '-x', '--reruns=2'] ... 
[2025-03-17 18:59:03.993709] 2025-03-17T18:59:05.5604279Z 2025-03-17T18:59:05.5605304Z cpp/tensor_iterator_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.tensor_iterator_test_1.1_21403fac721819bd_.log 2025-03-17T18:59:05.5606187Z 2025-03-17T18:59:05.5606450Z Running cpp/undefined_tensor_test 1/1 ... [2025-03-17 18:59:05.560321] 2025-03-17T18:59:05.5606920Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:05.5609033Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/undefined_tensor_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-af931f32d7365db6.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:05.560665] 2025-03-17T18:59:07.1276069Z 2025-03-17T18:59:07.1277071Z cpp/undefined_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.undefined_tensor_test_1.1_839e42e72f815f50_.log 2025-03-17T18:59:07.1277818Z 2025-03-17T18:59:07.1278015Z Running cpp/wrapdim_test 1/1 ... [2025-03-17 18:59:07.127453] 2025-03-17T18:59:07.1278496Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:07.1280305Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/wrapdim_test', '-m', 'serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-b279ac93b8c80fed.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:07.127807] 2025-03-17T18:59:08.6949232Z 2025-03-17T18:59:08.6950416Z cpp/wrapdim_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.wrapdim_test_1.1_2df29b19c01e1e91_.log 2025-03-17T18:59:08.6951132Z 2025-03-17T18:59:08.6960721Z Running cpp/Dict_test 1/1 ... [2025-03-17 18:59:08.695775] 2025-03-17T18:59:08.6961534Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:08.6963759Z Running cpp/Dimname_test 1/1 ... [2025-03-17 18:59:08.695949] 2025-03-17T18:59:08.6964554Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:08.6965216Z Running cpp/NamedTensor_test 1/1 ... [2025-03-17 18:59:08.696001] 2025-03-17T18:59:08.6965829Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:08.6967529Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/Dict_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-736a192722d72b96.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:08.696345] 2025-03-17T18:59:08.6970226Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/Dimname_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-f7ea9390b4924196.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:08.696487] 2025-03-17T18:59:08.6972521Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/NamedTensor_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-b931def11029e837.xml', '-x', '--reruns=2'] ... 
[2025-03-17 18:59:08.696626] 2025-03-17T18:59:12.4692694Z 2025-03-17T18:59:12.4694368Z cpp/Dimname_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.Dimname_test_1.1_8e0ecc54b70fb6cf_.log 2025-03-17T18:59:12.4695770Z 2025-03-17T18:59:13.5707106Z 2025-03-17T18:59:13.5708621Z cpp/NamedTensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.NamedTensor_test_1.1_d74a4df2b7f291c7_.log 2025-03-17T18:59:13.5709974Z 2025-03-17T18:59:16.4596891Z Running cpp/apply_utils_test 1/1 ... [2025-03-17 18:59:16.459222] 2025-03-17T18:59:16.4597739Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:16.4601300Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/apply_utils_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-84f286a1635376ee.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:16.459702] 2025-03-17T18:59:17.6199225Z Running cpp/atest 1/1 ... [2025-03-17 18:59:17.619454] 2025-03-17T18:59:17.6200415Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:17.6208169Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/atest', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-0cb5d3498dc191d5.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:17.619995] 2025-03-17T18:59:19.8013931Z 2025-03-17T18:59:19.8015227Z cpp/Dict_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.Dict_test_1.1_4498a5c43ec6aefe_.log 2025-03-17T18:59:19.8016474Z 2025-03-17T18:59:20.1944411Z 2025-03-17T18:59:20.1945960Z cpp/apply_utils_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.apply_utils_test_1.1_0274eb5f1f9a5372_.log 2025-03-17T18:59:20.1947345Z 2025-03-17T18:59:22.8945508Z 2025-03-17T18:59:22.8946532Z cpp/atest 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.atest_1.1_31d075fb0d3e5701_.log 2025-03-17T18:59:22.8947198Z 2025-03-17T18:59:23.4365261Z Running cpp/basic 1/1 ... [2025-03-17 18:59:23.436158] 2025-03-17T18:59:23.4365720Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:23.4369435Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/basic', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-95be225b9fba206b.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:23.436622] 2025-03-17T18:59:23.9942673Z Running cpp/broadcast_test 1/1 ... [2025-03-17 18:59:23.993790] 2025-03-17T18:59:23.9943471Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:23.9948019Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/broadcast_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-939ab5c76bb12ce7.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:23.994261] 2025-03-17T18:59:26.5567714Z 2025-03-17T18:59:26.5568772Z cpp/basic 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.basic_1.1_91e00bd5baa3c395_.log 2025-03-17T18:59:26.5569630Z 2025-03-17T18:59:26.7638087Z 2025-03-17T18:59:26.7639309Z cpp/broadcast_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.broadcast_test_1.1_d40531619fcbdfbb_.log 2025-03-17T18:59:26.7639981Z 2025-03-17T18:59:26.9441455Z Running cpp/cpu_generator_test 1/1 ... 
[2025-03-17 18:59:26.943656] 2025-03-17T18:59:26.9441959Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:26.9444507Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/cpu_generator_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-990aec42497ca142.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:26.944167] 2025-03-17T18:59:30.2990094Z Running cpp/dlconvertor_test 1/1 ... [2025-03-17 18:59:30.298598] 2025-03-17T18:59:30.2990847Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:30.2994860Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/dlconvertor_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-07196d1b3157ee36.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:30.299052] 2025-03-17T18:59:30.5273621Z Running cpp/extension_backend_test 1/1 ... [2025-03-17 18:59:30.526958] 2025-03-17T18:59:30.5274880Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:30.5279468Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/extension_backend_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-8ee32a9e5b9ed6f9.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:30.527394] 2025-03-17T18:59:32.0678072Z 2025-03-17T18:59:32.0679502Z cpp/cpu_generator_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.cpu_generator_test_1.1_38eee4cb1fd2cf06_.log 2025-03-17T18:59:32.0680761Z 2025-03-17T18:59:32.9683804Z 2025-03-17T18:59:32.9685611Z cpp/dlconvertor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.dlconvertor_test_1.1_766054119d161978_.log 2025-03-17T18:59:32.9686864Z 2025-03-17T18:59:33.2969912Z 2025-03-17T18:59:33.2971355Z cpp/extension_backend_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.extension_backend_test_1.1_3209a34cedc3b7c8_.log 2025-03-17T18:59:33.2972242Z 2025-03-17T18:59:35.7919271Z Running cpp/lazy_tensor_test 1/1 ... [2025-03-17 18:59:35.791511] 2025-03-17T18:59:35.7920146Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:35.7922962Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/lazy_tensor_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-b8808f85550161f9.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:35.791935] 2025-03-17T18:59:36.4570678Z Running cpp/legacy_vmap_test 1/1 ... [2025-03-17 18:59:36.456560] 2025-03-17T18:59:36.4571462Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:36.4577199Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/legacy_vmap_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-26fae44af71b680a.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:36.457140] 2025-03-17T18:59:36.8652674Z Running cpp/native_test 1/1 ... [2025-03-17 18:59:36.864720] 2025-03-17T18:59:36.8653512Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:36.8658062Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/native_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-09e314c872b0f4da.xml', '-x', '--reruns=2'] ... 
[2025-03-17 18:59:36.865175] 2025-03-17T18:59:38.5118533Z 2025-03-17T18:59:38.5120015Z cpp/lazy_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.lazy_tensor_test_1.1_cae01356be7237e8_.log 2025-03-17T18:59:38.5121357Z 2025-03-17T18:59:39.7853081Z 2025-03-17T18:59:39.7854875Z cpp/native_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.native_test_1.1_64c89cfdf0dd9242_.log 2025-03-17T18:59:39.7856036Z 2025-03-17T18:59:42.3672867Z Running cpp/operators_test 1/1 ... [2025-03-17 18:59:42.366883] 2025-03-17T18:59:42.3673389Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:42.3676646Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/operators_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-6c22bc4b82cb7e7a.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:42.367377] 2025-03-17T18:59:42.5830902Z 2025-03-17T18:59:42.5832436Z cpp/legacy_vmap_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.legacy_vmap_test_1.1_0d5197896e6b3ed5_.log 2025-03-17T18:59:42.5833765Z 2025-03-17T18:59:43.6534069Z Running cpp/scalar_tensor_test 1/1 ... [2025-03-17 18:59:43.652999] 2025-03-17T18:59:43.6536557Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:43.6541733Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/scalar_tensor_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-ca27a87cacfdcdda.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:43.653450] 2025-03-17T18:59:45.1886577Z 2025-03-17T18:59:45.1888111Z cpp/operators_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.operators_test_1.1_eb608413b0d638fe_.log 2025-03-17T18:59:45.1889348Z 2025-03-17T18:59:46.2728575Z 2025-03-17T18:59:46.2730180Z cpp/scalar_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.scalar_tensor_test_1.1_02b4a34badab858f_.log 2025-03-17T18:59:46.2731159Z 2025-03-17T18:59:46.7139432Z Running cpp/scalar_test 1/1 ... [2025-03-17 18:59:46.713540] 2025-03-17T18:59:46.7139937Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:46.7142905Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/scalar_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-eafceec4f8a44ffa.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:46.713997] 2025-03-17T18:59:48.9317925Z Running cpp/tensor_iterator_test 1/1 ... [2025-03-17 18:59:48.931325] 2025-03-17T18:59:48.9318831Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:48.9321746Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/tensor_iterator_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-a070f88e32d22444.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:48.931805] 2025-03-17T18:59:49.5336080Z 2025-03-17T18:59:49.5337497Z cpp/scalar_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.scalar_test_1.1_4497784467721f3d_.log 2025-03-17T18:59:49.5338568Z 2025-03-17T18:59:50.0455036Z Running cpp/undefined_tensor_test 1/1 ... 
[2025-03-17 18:59:50.045032] 2025-03-17T18:59:50.0458065Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:50.0460285Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/undefined_tensor_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-940d2c50996bce03.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:50.045544] 2025-03-17T18:59:52.6147471Z 2025-03-17T18:59:52.6148882Z cpp/undefined_tensor_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.undefined_tensor_test_1.1_96280f652688d6de_.log 2025-03-17T18:59:52.6150390Z 2025-03-17T18:59:53.6973211Z Running cpp/wrapdim_test 1/1 ... [2025-03-17 18:59:53.696914] 2025-03-17T18:59:53.6973920Z SCRIBE_GRAPHQL_ACCESS_TOKEN is set 2025-03-17T18:59:53.6978127Z Executing ['pytest', '/var/lib/jenkins/workspace/build/bin/wrapdim_test', '-m', 'not serial', '-v', '-vv', '-rfEX', '-n', '3', '--junit-xml-reruns', 'test-reports/python-pytest/test.run_test/test.run_test-bc6db8ab503c6547.xml', '-x', '--reruns=2'] ... [2025-03-17 18:59:53.697410] 2025-03-17T18:59:56.6173524Z 2025-03-17T18:59:56.6174860Z cpp/wrapdim_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.wrapdim_test_1.1_3ca6cd9b9df2d8b0_.log 2025-03-17T18:59:56.6176144Z 2025-03-17T19:00:00.9180222Z 2025-03-17T19:00:00.9181517Z cpp/tensor_iterator_test 1/1 was successful, full logs can be found in artifacts with path test/test-reports/cpp.tensor_iterator_test_1.1_6f25ec091ffb470b_.log 2025-03-17T19:00:00.9182356Z 2025-03-17T19:00:01.6889220Z Running test batch 'tests to run' cost 83.33 seconds 2025-03-17T19:00:02.3035214Z + run_if_exists tensor_interop_test 2025-03-17T19:00:02.3035903Z + local test_name=tensor_interop_test 2025-03-17T19:00:02.3036406Z + [[ -x build/bin/tensor_interop_test ]] 2025-03-17T19:00:02.3037438Z + echo 'Warning: tensor_interop_test does not exist.' 2025-03-17T19:00:02.3038109Z Warning: tensor_interop_test does not exist. 2025-03-17T19:00:02.3038837Z + run_if_exists cudnn_test 2025-03-17T19:00:02.3039396Z + local test_name=cudnn_test 2025-03-17T19:00:02.3039981Z + [[ -x build/bin/cudnn_test ]] 2025-03-17T19:00:02.3040510Z + echo 'Warning: cudnn_test does not exist.' 2025-03-17T19:00:02.3041197Z Warning: cudnn_test does not exist. 2025-03-17T19:00:02.3041724Z + run_if_exists cuda_generator_test 2025-03-17T19:00:02.3042100Z + local test_name=cuda_generator_test 2025-03-17T19:00:02.3042502Z + [[ -x build/bin/cuda_generator_test ]] 2025-03-17T19:00:02.3042972Z + echo 'Warning: cuda_generator_test does not exist.' 2025-03-17T19:00:02.3043491Z Warning: cuda_generator_test does not exist. 2025-03-17T19:00:02.3043923Z + run_if_exists apply_test 2025-03-17T19:00:02.3044229Z + local test_name=apply_test 2025-03-17T19:00:02.3044598Z + [[ -x build/bin/apply_test ]] 2025-03-17T19:00:02.3044936Z + echo 'Warning: apply_test does not exist.' 2025-03-17T19:00:02.3045350Z Warning: apply_test does not exist. 2025-03-17T19:00:02.3045683Z + run_if_exists stream_test 2025-03-17T19:00:02.3046036Z + local test_name=stream_test 2025-03-17T19:00:02.3046349Z + [[ -x build/bin/stream_test ]] 2025-03-17T19:00:02.3046730Z + echo 'Warning: stream_test does not exist.' 2025-03-17T19:00:02.3047107Z Warning: stream_test does not exist. 
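Each of the "Executing ['pytest', ...]" entries above is the test driver launching a C++ gtest binary through pytest. A minimal sketch of reproducing one such run by hand, assuming the CI image's pytest plugins are available (pytest-cpp for collecting gtest binaries, pytest-xdist for -n, pytest-rerunfailures for --reruns, plus PyTorch's bundled plugin that provides --junit-xml-reruns) and a finished build under build/bin; the report filename is illustrative:

# Sketch only: re-run one C++ test binary the way the log shows, from the
# workspace root. All flags are copied from the "Executing [...]" lines above;
# only the output XML name is made up for this local run.
cd /var/lib/jenkins/workspace
pytest build/bin/basic \
    -m 'not serial' -v -vv -rfEX \
    -n 3 -x --reruns=2 \
    --junit-xml-reruns test-reports/python-pytest/test.run_test/basic-local.xml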
2025-03-17T19:00:02.3047483Z + run_if_exists cuda_half_test 2025-03-17T19:00:02.3047807Z + local test_name=cuda_half_test 2025-03-17T19:00:02.3048130Z + [[ -x build/bin/cuda_half_test ]] 2025-03-17T19:00:02.3048545Z + echo 'Warning: cuda_half_test does not exist.' 2025-03-17T19:00:02.3048942Z Warning: cuda_half_test does not exist. 2025-03-17T19:00:02.3049336Z + run_if_exists cuda_vectorized_test 2025-03-17T19:00:02.3049682Z + local test_name=cuda_vectorized_test 2025-03-17T19:00:02.3050088Z + [[ -x build/bin/cuda_vectorized_test ]] 2025-03-17T19:00:02.3050508Z + echo 'Warning: cuda_vectorized_test does not exist.' 2025-03-17T19:00:02.3050958Z Warning: cuda_vectorized_test does not exist. 2025-03-17T19:00:02.3051391Z + run_if_exists cuda_distributions_test 2025-03-17T19:00:02.3051740Z + local test_name=cuda_distributions_test 2025-03-17T19:00:02.3052166Z + [[ -x build/bin/cuda_distributions_test ]] 2025-03-17T19:00:02.3052587Z + echo 'Warning: cuda_distributions_test does not exist.' 2025-03-17T19:00:02.3053164Z Warning: cuda_distributions_test does not exist. 2025-03-17T19:00:02.3053583Z + run_if_exists cuda_optional_test 2025-03-17T19:00:02.3053944Z + local test_name=cuda_optional_test 2025-03-17T19:00:02.3054307Z + [[ -x build/bin/cuda_optional_test ]] 2025-03-17T19:00:02.3054740Z + echo 'Warning: cuda_optional_test does not exist.' 2025-03-17T19:00:02.3055212Z Warning: cuda_optional_test does not exist. 2025-03-17T19:00:02.3055588Z + run_if_exists cuda_tensor_interop_test 2025-03-17T19:00:02.3056006Z + local test_name=cuda_tensor_interop_test 2025-03-17T19:00:02.3056379Z + [[ -x build/bin/cuda_tensor_interop_test ]] 2025-03-17T19:00:02.3056855Z + echo 'Warning: cuda_tensor_interop_test does not exist.' 2025-03-17T19:00:02.3064786Z Warning: cuda_tensor_interop_test does not exist. 2025-03-17T19:00:02.3065260Z + run_if_exists cuda_complex_test 2025-03-17T19:00:02.3065601Z + local test_name=cuda_complex_test 2025-03-17T19:00:02.3065993Z + [[ -x build/bin/cuda_complex_test ]] 2025-03-17T19:00:02.3066461Z + echo 'Warning: cuda_complex_test does not exist.' 2025-03-17T19:00:02.3066933Z Warning: cuda_complex_test does not exist. 2025-03-17T19:00:02.3067325Z + run_if_exists cuda_complex_math_test 2025-03-17T19:00:02.3067724Z + local test_name=cuda_complex_math_test 2025-03-17T19:00:02.3068108Z + [[ -x build/bin/cuda_complex_math_test ]] 2025-03-17T19:00:02.3068550Z + echo 'Warning: cuda_complex_math_test does not exist.' 2025-03-17T19:00:02.3069037Z Warning: cuda_complex_math_test does not exist. 2025-03-17T19:00:02.3069416Z + run_if_exists cuda_cub_test 2025-03-17T19:00:02.3069781Z + local test_name=cuda_cub_test 2025-03-17T19:00:02.3070227Z + [[ -x build/bin/cuda_cub_test ]] 2025-03-17T19:00:02.3070683Z + echo 'Warning: cuda_cub_test does not exist.' 2025-03-17T19:00:02.3071066Z Warning: cuda_cub_test does not exist. 2025-03-17T19:00:02.3071478Z + run_if_exists cuda_atomic_ops_test 2025-03-17T19:00:02.3071821Z + local test_name=cuda_atomic_ops_test 2025-03-17T19:00:02.3072232Z + [[ -x build/bin/cuda_atomic_ops_test ]] 2025-03-17T19:00:02.3072632Z + echo 'Warning: cuda_atomic_ops_test does not exist.' 2025-03-17T19:00:02.3073097Z Warning: cuda_atomic_ops_test does not exist. 
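The `+ run_if_exists <name>` / `Warning: <name> does not exist.` pairs above come from a small helper in the CI test script that skips C++ test binaries this CPU-only build never produced. A hedged reconstruction of its shape based purely on the trace; the existence check and the warning branch are verbatim from the log, while the way an existing binary would be invoked is an assumption:

# Hypothetical reconstruction of the run_if_exists helper traced above.
# Only the -x check and the warning text are confirmed by the log; running
# the binary directly in the "exists" branch is an assumption.
run_if_exists() {
    local test_name=$1
    if [[ -x "build/bin/${test_name}" ]]; then
        "build/bin/${test_name}"
    else
        echo "Warning: ${test_name} does not exist."
    fi
}

run_if_exists cuda_half_test   # prints the warning on this CPU-only build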
2025-03-17T19:00:02.3073446Z + '[' ON == ON ']' 2025-03-17T19:00:02.3074166Z + valgrind --suppressions=/var/lib/jenkins/workspace/aten/tools/valgrind.sup --error-exitcode=1 build/bin/basic '--gtest_filter=-*CUDA' 2025-03-17T19:00:02.3369436Z ==5794== Memcheck, a memory error detector 2025-03-17T19:00:02.3369929Z ==5794== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al. 2025-03-17T19:00:02.3370515Z ==5794== Using Valgrind-3.20.0 and LibVEX; rerun with -h for copyright info 2025-03-17T19:00:02.3371054Z ==5794== Command: build/bin/basic --gtest_filter=-*CUDA 2025-03-17T19:00:02.3371437Z ==5794== 2025-03-17T19:00:02.8394004Z ==5794== Warning: set address range perms: large range [0x4a08000, 0x15b1c000) (defined) 2025-03-17T19:00:31.3004061Z Running main() from /var/lib/jenkins/workspace/third_party/googletest/googletest/src/gtest_main.cc 2025-03-17T19:00:31.3310750Z Note: Google Test filter = -*CUDA 2025-03-17T19:00:31.3357604Z [==========] Running 4 tests from 1 test suite. 2025-03-17T19:00:31.3384403Z [----------] Global test environment set-up. 2025-03-17T19:00:31.3454844Z [----------] 4 tests from BasicTest 2025-03-17T19:00:31.3478754Z [ RUN ] BasicTest.BasicTestCPU 2025-03-17T19:00:32.7636529Z 377 ms 2025-03-17T19:00:32.8474563Z 53 ms 2025-03-17T19:00:32.9208087Z 65 ms 2025-03-17T19:00:33.5433497Z [ OK ] BasicTest.BasicTestCPU (2193 ms) 2025-03-17T19:00:33.5442857Z [ RUN ] BasicTest.BasicTestHalfCPU 2025-03-17T19:00:33.6695286Z 81 ms 2025-03-17T19:00:33.7193012Z 45 ms 2025-03-17T19:00:33.7877239Z 66 ms 2025-03-17T19:00:33.8405401Z [ OK ] BasicTest.BasicTestHalfCPU (295 ms) 2025-03-17T19:00:33.8407177Z [ RUN ] BasicTest.FactoryMethodsTest 2025-03-17T19:00:33.8730708Z [ OK ] BasicTest.FactoryMethodsTest (32 ms) 2025-03-17T19:00:33.8731137Z [ RUN ] BasicTest.BasicStdTestCPU 2025-03-17T19:00:33.9564267Z Simple example: called once 2025-03-17T19:00:34.0473075Z throw: call_once will retry 2025-03-17T19:00:34.0487308Z Didn't throw, call_once will not attempt again 2025-03-17T19:00:34.0912883Z [ OK ] BasicTest.BasicStdTestCPU (218 ms) 2025-03-17T19:00:34.0935871Z [----------] 4 tests from BasicTest (2744 ms total) 2025-03-17T19:00:34.0936183Z 2025-03-17T19:00:34.0950014Z [----------] Global test environment tear-down 2025-03-17T19:00:34.0981312Z [==========] 4 tests from 1 test suite ran. (2770 ms total) 2025-03-17T19:00:34.0992172Z [ PASSED ] 4 tests. 
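The memcheck pass above (gated by the `'[' ON == ON ']'` build-flag check) can be reproduced with the same valgrind invocation. A sketch, assuming a local checkout and build laid out like the CI workspace:

# Re-run the valgrind step from the log. --error-exitcode=1 turns any memcheck
# error into a non-zero exit so the CI step fails; the gtest filter excludes
# CUDA tests, which cannot run on this CPU-only runner. Paths are relative to
# the workspace root instead of the absolute path shown in the log.
cd /var/lib/jenkins/workspace
valgrind \
    --suppressions=aten/tools/valgrind.sup \
    --error-exitcode=1 \
    build/bin/basic '--gtest_filter=-*CUDA'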
2025-03-17T19:00:35.8711787Z ==5794== 2025-03-17T19:00:35.8716052Z ==5794== HEAP SUMMARY: 2025-03-17T19:00:35.8716466Z ==5794== in use at exit: 240,152 bytes in 3,996 blocks 2025-03-17T19:00:35.8717048Z ==5794== total heap usage: 754,418 allocs, 750,422 frees, 215,823,397 bytes allocated 2025-03-17T19:00:35.8717567Z ==5794== 2025-03-17T19:00:35.9096281Z ==5794== LEAK SUMMARY: 2025-03-17T19:00:35.9096782Z ==5794== definitely lost: 0 bytes in 0 blocks 2025-03-17T19:00:35.9097268Z ==5794== indirectly lost: 0 bytes in 0 blocks 2025-03-17T19:00:35.9097816Z ==5794== possibly lost: 0 bytes in 0 blocks 2025-03-17T19:00:35.9098606Z ==5794== still reachable: 240,152 bytes in 3,996 blocks 2025-03-17T19:00:35.9099053Z ==5794== suppressed: 0 bytes in 0 blocks 2025-03-17T19:00:35.9099520Z ==5794== Rerun with --leak-check=full to see details of leaked memory 2025-03-17T19:00:35.9099935Z ==5794== 2025-03-17T19:00:35.9100282Z ==5794== For lists of detected and suppressed errors, rerun with: -s 2025-03-17T19:00:35.9100832Z ==5794== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0) 2025-03-17T19:00:35.9516749Z + [[ -x build/bin/tensor_interop_test ]] 2025-03-17T19:00:35.9518863Z + [[ -n '' ]] 2025-03-17T19:00:35.9519175Z + assert_git_not_dirty 2025-03-17T19:00:35.9519490Z + [[ linux-focal-py3.13-clang10 != *rocm* ]] 2025-03-17T19:00:35.9519873Z + [[ linux-focal-py3.13-clang10 != *xla* ]] 2025-03-17T19:00:35.9525444Z ++ git status --porcelain 2025-03-17T19:00:35.9526242Z ++ grep -v '?? third_party' 2025-03-17T19:00:36.1750236Z ++ true 2025-03-17T19:00:36.1750923Z + git_status= 2025-03-17T19:00:36.1751207Z + [[ -n '' ]] 2025-03-17T19:00:36.1753173Z + cleanup_workspace 2025-03-17T19:00:36.1753952Z + echo 'sudo may print the following warning message that can be ignored. The chown command will still run.' 2025-03-17T19:00:36.1755001Z sudo may print the following warning message that can be ignored. The chown command will still run. 
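The `assert_git_not_dirty` trace above verifies that the test run left the checkout untouched (ignoring untracked third_party entries), and skips the check on rocm and xla builds. A hedged reconstruction based on the trace; the build-environment guards and the porcelain pipeline match the log, while the variable name and the message printed on a dirty tree are assumptions:

# Hypothetical reconstruction of assert_git_not_dirty as traced above. The
# BUILD_ENVIRONMENT variable name and the error wording are assumed; the
# guards and the git status | grep pipeline follow the trace.
assert_git_not_dirty() {
    if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *xla* ]]; then
        local git_status
        git_status=$(git status --porcelain | grep -v '?? third_party' || true)
        if [[ -n "$git_status" ]]; then
            echo "Build/test left the git checkout dirty:"   # assumed wording
            echo "$git_status"
            exit 1
        fi
    fi
}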
2025-03-17T19:00:36.1755674Z + echo ' sudo: setrlimit(RLIMIT_STACK): Operation not permitted' 2025-03-17T19:00:36.1756188Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted 2025-03-17T19:00:36.1756777Z + echo 'For more details refer to https://github.com/sudo-project/sudo/issues/42' 2025-03-17T19:00:36.1757415Z For more details refer to https://github.com/sudo-project/sudo/issues/42 2025-03-17T19:00:36.1757925Z + sudo chown -R 1000 /var/lib/jenkins/workspace 2025-03-17T19:00:39.1561167Z ##[group]Run pytorch/test-infra/.github/actions/upload-benchmark-results@main 2025-03-17T19:00:39.1561670Z with: 2025-03-17T19:00:39.1562047Z benchmark-results-dir: test/test-reports 2025-03-17T19:00:39.1562642Z dry-run: false 2025-03-17T19:00:39.1563097Z schema-version: v3 2025-03-17T19:00:39.1563579Z github-token: *** 2025-03-17T19:00:39.1563833Z env: 2025-03-17T19:00:39.1564074Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:39.1564569Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:39.1565128Z ##[endgroup] 2025-03-17T19:00:39.1594677Z ##[group]Run set -eux 2025-03-17T19:00:39.1594986Z set -eux 2025-03-17T19:00:39.1595288Z python3 -mpip install boto3==1.35.33 2025-03-17T19:00:39.1604589Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:39.1605007Z env: 2025-03-17T19:00:39.1605237Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:39.1605775Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:39.1606302Z ##[endgroup] 2025-03-17T19:00:39.1634104Z + python3 -mpip install boto3==1.35.33 2025-03-17T19:00:39.4234777Z Defaulting to user installation because normal site-packages is not writeable 2025-03-17T19:00:40.4772163Z Collecting boto3==1.35.33 2025-03-17T19:00:40.5002173Z Downloading boto3-1.35.33-py3-none-any.whl (139 kB) 2025-03-17T19:00:40.5203253Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.33) (0.10.0) 2025-03-17T19:00:41.6924210Z Collecting botocore<1.36.0,>=1.35.33 2025-03-17T19:00:41.6959249Z Downloading botocore-1.35.99-py3-none-any.whl (13.3 MB) 2025-03-17T19:00:41.8744602Z Collecting s3transfer<0.11.0,>=0.10.0 2025-03-17T19:00:41.8774363Z Downloading s3transfer-0.10.4-py3-none-any.whl (83 kB) 2025-03-17T19:00:41.8863193Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (2.8.1) 2025-03-17T19:00:41.8873684Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.25.10) 2025-03-17T19:00:42.1241141Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.15.0) 2025-03-17T19:00:42.2196795Z Installing collected packages: botocore, s3transfer, boto3 2025-03-17T19:00:42.7536043Z Successfully installed boto3-1.35.33 botocore-1.35.99 s3transfer-0.10.4 2025-03-17T19:00:42.8517088Z ##[group]Run set -eux 2025-03-17T19:00:42.8517497Z set -eux 2025-03-17T19:00:42.8517752Z  2025-03-17T19:00:42.8518020Z if [[ -z "${GITHUB_TOKEN}" ]]; then 2025-03-17T19:00:42.8518405Z  echo "Missing github-token input" 2025-03-17T19:00:42.8518756Z  exit 1 2025-03-17T19:00:42.8519005Z fi 2025-03-17T19:00:42.8525420Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:42.8525829Z env: 2025-03-17T19:00:42.8526059Z GIT_DEFAULT_BRANCH: main 
2025-03-17T19:00:42.8526549Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:42.8527426Z GITHUB_TOKEN: *** 2025-03-17T19:00:42.8527775Z ##[endgroup] 2025-03-17T19:00:42.8558265Z + [[ -z *** ]] 2025-03-17T19:00:42.8612107Z ##[group]Run pytorch/test-infra/.github/actions/get-workflow-job-id@main 2025-03-17T19:00:42.8612592Z with: 2025-03-17T19:00:42.8612965Z github-token: *** 2025-03-17T19:00:42.8613223Z env: 2025-03-17T19:00:42.8613464Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:42.8613975Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:42.8614504Z ##[endgroup] 2025-03-17T19:00:42.8639944Z ##[group]Run set -eux 2025-03-17T19:00:42.8640246Z set -eux 2025-03-17T19:00:42.8640505Z  2025-03-17T19:00:42.8641163Z python3 "${GITHUB_ACTION_PATH}/../../scripts/get_workflow_job_id.py" "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-03-17T19:00:42.8646637Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:42.8647051Z env: 2025-03-17T19:00:42.8647287Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:42.8647781Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:42.8648562Z GITHUB_TOKEN: *** 2025-03-17T19:00:42.8648813Z ##[endgroup] 2025-03-17T19:00:42.8671220Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/get-workflow-job-id/../../scripts/get_workflow_job_id.py 13905937446 i-0287a0cab9cae3fa7 2025-03-17T19:00:45.6478621Z setting job-id=38909654187 2025-03-17T19:00:45.6479199Z setting job-name=linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T19:00:45.6595029Z ##[group]Run set -eux 2025-03-17T19:00:45.6595352Z set -eux 2025-03-17T19:00:45.6595603Z  2025-03-17T19:00:45.6596048Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_metadata.py" \ 2025-03-17T19:00:45.6596607Z  --schema-version "${SCHEMA_VERSION}" \ 2025-03-17T19:00:45.6596983Z  --repo "${REPO}" \ 2025-03-17T19:00:45.6597313Z  --head-branch "${HEAD_BRANCH}" \ 2025-03-17T19:00:45.6597687Z  --head-sha "${HEAD_SHA}" \ 2025-03-17T19:00:45.6598052Z  --workflow-id "${WORKFLOW_RUN_ID}" \ 2025-03-17T19:00:45.6598436Z  --run-attempt "${RUN_ATTEMPT}" \ 2025-03-17T19:00:45.6598793Z  --job-id "${JOB_ID}" \ 2025-03-17T19:00:45.6599120Z  --job-name "${JOB_NAME}" 2025-03-17T19:00:45.6604841Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.6605256Z env: 2025-03-17T19:00:45.6605499Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.6605997Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.6606528Z SCHEMA_VERSION: v3 2025-03-17T19:00:45.6606819Z REPO: pytorch/pytorch 2025-03-17T19:00:45.6607116Z HEAD_BRANCH: gh/fadara01/5/head 2025-03-17T19:00:45.6607484Z HEAD_SHA: 52b86900e894e6b34d880548ab6883b3d9207fb6 2025-03-17T19:00:45.6607861Z WORKFLOW_RUN_ID: 13905937446 2025-03-17T19:00:45.6608161Z RUN_ATTEMPT: 1 2025-03-17T19:00:45.6608423Z JOB_ID: 38909654187 2025-03-17T19:00:45.6608870Z JOB_NAME: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T19:00:45.6609372Z ##[endgroup] 2025-03-17T19:00:45.6634127Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_metadata.py --schema-version v3 --repo pytorch/pytorch --head-branch gh/fadara01/5/head --head-sha 52b86900e894e6b34d880548ab6883b3d9207fb6 
--workflow-id 13905937446 --run-attempt 1 --job-id 38909654187 --job-name 'linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)' 2025-03-17T19:00:45.6950501Z ##[group]Run set -eux 2025-03-17T19:00:45.6950817Z set -eux 2025-03-17T19:00:45.6951071Z  2025-03-17T19:00:45.6951346Z # TODO (huydhn): Implement this part 2025-03-17T19:00:45.6951745Z echo "runners=[]" >> "${GITHUB_OUTPUT}" 2025-03-17T19:00:45.6957201Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.6957702Z env: 2025-03-17T19:00:45.6957943Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.6958432Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.6958955Z ##[endgroup] 2025-03-17T19:00:45.6995314Z + echo 'runners=[]' 2025-03-17T19:00:45.7024837Z ##[group]Run set -eux 2025-03-17T19:00:45.7025135Z set -eux 2025-03-17T19:00:45.7025390Z  2025-03-17T19:00:45.7025747Z # TODO (huydhn): Implement this part 2025-03-17T19:00:45.7026173Z echo "dependencies={}" >> "${GITHUB_OUTPUT}" 2025-03-17T19:00:45.7031648Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.7032218Z env: 2025-03-17T19:00:45.7032453Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.7032945Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.7033478Z ##[endgroup] 2025-03-17T19:00:45.7054087Z + echo 'dependencies={}' 2025-03-17T19:00:45.7083928Z ##[group]Run set -eux 2025-03-17T19:00:45.7084233Z set -eux 2025-03-17T19:00:45.7084482Z  2025-03-17T19:00:45.7084765Z if [[ ! -d "${BENCHMARK_RESULTS_DIR}" ]]; then 2025-03-17T19:00:45.7085236Z  echo "${BENCHMARK_RESULTS_DIR} does not exist, skipping" 2025-03-17T19:00:45.7085756Z  # We don't want the job to fail if the directory doesn't exist 2025-03-17T19:00:45.7086181Z  exit 0 2025-03-17T19:00:45.7086422Z fi 2025-03-17T19:00:45.7086650Z  2025-03-17T19:00:45.7086904Z if [[ "${DRY_RUN}" == "true" ]]; then 2025-03-17T19:00:45.7087420Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-03-17T19:00:45.7088007Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-03-17T19:00:45.7088459Z  --metadata "${BENCHMARK_METADATA}" \ 2025-03-17T19:00:45.7088835Z  --runners "${RUNNER_INFO}" \ 2025-03-17T19:00:45.7089202Z  --dependencies "${DEPENDENCIES}" \ 2025-03-17T19:00:45.7089563Z  --dry-run 2025-03-17T19:00:45.7089833Z else 2025-03-17T19:00:45.7090230Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-03-17T19:00:45.7090818Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-03-17T19:00:45.7091274Z  --metadata "${BENCHMARK_METADATA}" \ 2025-03-17T19:00:45.7091650Z  --runners "${RUNNER_INFO}" \ 2025-03-17T19:00:45.7092018Z  --dependencies "${DEPENDENCIES}" 2025-03-17T19:00:45.7092357Z fi 2025-03-17T19:00:45.7097316Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.7097725Z env: 2025-03-17T19:00:45.7097958Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.7098435Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.7098995Z BENCHMARK_RESULTS_DIR: test/test-reports 2025-03-17T19:00:45.7099334Z DRY_RUN: false 2025-03-17T19:00:45.7100710Z BENCHMARK_METADATA: {"timestamp": 1742238045, "schema_version": "v3", "name": "linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)", "repo": "pytorch/pytorch", "head_branch": "gh/fadara01/5/head", "head_sha": "52b86900e894e6b34d880548ab6883b3d9207fb6", 
"workflow_id": 13905937446, "run_attempt": 1, "job_id": 38909654187} 2025-03-17T19:00:45.7102233Z RUNNER_INFO: [] 2025-03-17T19:00:45.7102494Z DEPENDENCIES: {} 2025-03-17T19:00:45.7102749Z ##[endgroup] 2025-03-17T19:00:45.7122552Z + [[ ! -d test/test-reports ]] 2025-03-17T19:00:45.7123118Z + [[ false == \t\r\u\e ]] 2025-03-17T19:00:45.7125807Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/upload_benchmark_results.py --benchmark-results-dir test/test-reports --metadata '{"timestamp": 1742238045, "schema_version": "v3", "name": "linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)", "repo": "pytorch/pytorch", "head_branch": "gh/fadara01/5/head", "head_sha": "52b86900e894e6b34d880548ab6883b3d9207fb6", "workflow_id": 13905937446, "run_attempt": 1, "job_id": 38909654187}' --runners '[]' --dependencies '{}' 2025-03-17T19:00:45.8913581Z ##[group]Run cat test/**/*_toprint.log || true 2025-03-17T19:00:45.8914009Z cat test/**/*_toprint.log || true 2025-03-17T19:00:45.8919607Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.8920020Z env: 2025-03-17T19:00:45.8920265Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.8920757Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.8921276Z ##[endgroup] 2025-03-17T19:00:45.8990722Z cat: 'test/**/*_toprint.log': No such file or directory 2025-03-17T19:00:45.9027918Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2025-03-17T19:00:45.9028297Z kill "$MONITOR_SCRIPT_PID" 2025-03-17T19:00:45.9033579Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:45.9034003Z env: 2025-03-17T19:00:45.9034242Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:45.9034716Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:45.9035252Z MONITOR_SCRIPT_PID: 98234 2025-03-17T19:00:45.9035542Z ##[endgroup] 2025-03-17T19:00:45.9181300Z Prepare all required actions 2025-03-17T19:00:45.9181777Z Getting action download info 2025-03-17T19:00:46.0758420Z Download action repository 'actions/upload-artifact@v4' (SHA:4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1) 2025-03-17T19:00:46.5208965Z ##[group]Run ./.github/actions/upload-test-artifacts 2025-03-17T19:00:46.5209356Z with: 2025-03-17T19:00:46.5209710Z file-suffix: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T19:00:46.5210150Z s3-bucket: gha-artifacts 2025-03-17T19:00:46.5210436Z env: 2025-03-17T19:00:46.5210669Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:46.5211155Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:46.5211685Z ##[endgroup] 2025-03-17T19:00:46.5243051Z ##[group]Run # Remove any previous test jsons if they exist 2025-03-17T19:00:46.5243540Z # Remove any previous test jsons if they exist 2025-03-17T19:00:46.5243956Z rm -f test-jsons-*.zip 2025-03-17T19:00:46.5244466Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test/test-reports -i '*.json' 2025-03-17T19:00:46.5250197Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:46.5250607Z env: 2025-03-17T19:00:46.5250853Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:46.5251344Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:46.5251976Z FILE_SUFFIX: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T19:00:46.5252398Z ##[endgroup] 2025-03-17T19:00:46.5379114Z adding: 
test/test-reports/td_exclusions-189e4fbb808540573735.json (deflated 81%) 2025-03-17T19:00:46.5379828Z adding: test/test-reports/td_exclusions-3d413a75f121d9914549.json (deflated 73%) 2025-03-17T19:00:46.5406785Z ##[group]Run # Remove any previous test reports if they exist 2025-03-17T19:00:46.5407278Z # Remove any previous test reports if they exist 2025-03-17T19:00:46.5407690Z rm -f test-reports-*.zip 2025-03-17T19:00:46.5408207Z zip -r "test-reports-${FILE_SUFFIX}.zip" test/test-reports -i '*.xml' -i '*.csv' 2025-03-17T19:00:46.5413918Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:46.5414314Z env: 2025-03-17T19:00:46.5414542Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:46.5415024Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:46.5415648Z FILE_SUFFIX: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T19:00:46.5416064Z ##[endgroup] 2025-03-17T19:00:46.5503527Z adding: test/test-reports/python-pytest/test_native_mha/test_native_mha-ca656b647021a8b1.xml (deflated 95%) 2025-03-17T19:00:46.5504621Z adding: test/test-reports/python-pytest/test_transformers_privateuse1/test_transformers_privateuse1-7b260af18f3f523f.xml (deflated 71%) 2025-03-17T19:00:46.5505842Z adding: test/test-reports/python-pytest/test_show_pickle/test_show_pickle-ae33fcb7aa1750fb.xml (deflated 37%) 2025-03-17T19:00:46.5842030Z adding: test/test-reports/python-pytest/test_torch/test_torch-f61814fc0f95e70c.xml (deflated 98%) 2025-03-17T19:00:46.5850381Z adding: test/test-reports/python-pytest/test_fake_tensor/test_fake_tensor-1221861b7d762278.xml (deflated 94%) 2025-03-17T19:00:46.5852222Z adding: test/test-reports/python-pytest/test_jit_disabled/test_jit_disabled-f8d61d2693321413.xml (deflated 56%) 2025-03-17T19:00:46.5853815Z adding: test/test-reports/python-pytest/test_autocast/test_autocast-bda1db66be53e26e.xml (deflated 92%) 2025-03-17T19:00:46.5860416Z adding: test/test-reports/python-pytest/test_python_dispatch/test_python_dispatch-0e9eabbae6d22a41.xml (deflated 94%) 2025-03-17T19:00:46.5862481Z adding: test/test-reports/python-pytest/test_cpp_extensions_mtia_backend/test_cpp_extensions_mtia_backend-c73fba05d9ae5f87.xml (deflated 79%) 2025-03-17T19:00:46.5864657Z adding: test/test-reports/python-pytest/test_autograd_fallback/test_autograd_fallback-e469bd114ccf9fc5.xml (deflated 88%) 2025-03-17T19:00:46.5872111Z adding: test/test-reports/python-pytest/test_multiprocessing/test_multiprocessing-c401c886ba0e2c27.xml (deflated 96%) 2025-03-17T19:00:46.5874278Z adding: test/test-reports/python-pytest/test_cpp_extensions_stream_and_event/test_cpp_extensions_stream_and_event-38b8aa6d5731322b.xml (deflated 59%) 2025-03-17T19:00:46.6174009Z adding: test/test-reports/python-pytest/test_tensor_creation_ops/test_tensor_creation_ops-4231c3fa06761f13.xml (deflated 97%) 2025-03-17T19:00:46.8780065Z adding: test/test-reports/python-pytest/test_nn/test_nn-4e4e2cff9089759f.xml (deflated 91%) 2025-03-17T19:00:46.8808538Z adding: test/test-reports/python-pytest/nn.test_pooling/nn.test_pooling-e9676b0df9d2c69d.xml (deflated 98%) 2025-03-17T19:00:46.8834801Z adding: test/test-reports/python-pytest/test_overrides/test_overrides-c333575b8b00e591.xml (deflated 96%) 2025-03-17T19:00:46.8836960Z adding: test/test-reports/python-pytest/test_multiprocessing_spawn/test_multiprocessing_spawn-d40a6cba3cfeb78d.xml (deflated 94%) 2025-03-17T19:00:46.9150177Z adding: test/test-reports/python-pytest/test_reductions/test_reductions-404ed40e693e1f12.xml 
(deflated 98%) 2025-03-17T19:00:46.9539661Z adding: test/test-reports/python-pytest/test_reductions/test_reductions-aa090797f0a7a87e.xml (deflated 98%) 2025-03-17T19:00:46.9797888Z adding: test/test-reports/python-pytest/test_reductions/test_reductions-0edf328cc9704919.xml (deflated 98%) 2025-03-17T19:00:47.0097124Z adding: test/test-reports/python-pytest/test_reductions/test_reductions-edd3fa123ceb02cd.xml (deflated 99%) 2025-03-17T19:00:47.0137371Z adding: test/test-reports/python-pytest/distributions.test_distributions/distributions.test_distributions-b092ad71bfe20204.xml (deflated 96%) 2025-03-17T19:00:47.0179683Z adding: test/test-reports/python-pytest/distributions.test_distributions/distributions.test_distributions-694e3310569c1dcf.xml (deflated 96%) 2025-03-17T19:00:47.0183552Z adding: test/test-reports/python-pytest/test_cpp_extensions_aot_ninja/test_cpp_extensions_aot_ninja-9721d2d8d1d21d80.xml (deflated 95%) 2025-03-17T19:00:47.0187526Z adding: test/test-reports/python-pytest/test_cpp_extensions_aot_no_ninja/test_cpp_extensions_aot_no_ninja-d3969cf8a940ef6e.xml (deflated 95%) 2025-03-17T19:00:47.0188720Z adding: test/test-reports/python-pytest/test_jiterator/test_jiterator-826a6811bad268de.xml (deflated 28%) 2025-03-17T19:00:47.0189645Z adding: test/test-reports/python-pytest/test_jiterator/test_jiterator-95723c1db7bc3e4d.xml (deflated 28%) 2025-03-17T19:00:47.0190554Z adding: test/test-reports/python-pytest/xpu.test_conv/xpu.test_conv-47294ad3beb7abf1.xml (deflated 28%) 2025-03-17T19:00:47.0191453Z adding: test/test-reports/python-pytest/xpu.test_conv/xpu.test_conv-a0e740be03f4c597.xml (deflated 28%) 2025-03-17T19:00:47.0192399Z adding: test/test-reports/python-pytest/xpu.test_gemm/xpu.test_gemm-4a9e3ebe3c690b35.xml (deflated 28%) 2025-03-17T19:00:47.0193296Z adding: test/test-reports/python-pytest/xpu.test_gemm/xpu.test_gemm-1d1cd118f574f82c.xml (deflated 28%) 2025-03-17T19:00:47.0194222Z adding: test/test-reports/python-pytest/test_matmul_cuda/test_matmul_cuda-7293242550f756b5.xml (deflated 28%) 2025-03-17T19:00:47.0195161Z adding: test/test-reports/python-pytest/test_matmul_cuda/test_matmul_cuda-42d36de3e1361dd6.xml (deflated 28%) 2025-03-17T19:00:47.0196122Z adding: test/test-reports/python-pytest/test_cuda_multigpu/test_cuda_multigpu-71742e00dcf796ed.xml (deflated 28%) 2025-03-17T19:00:47.0197109Z adding: test/test-reports/python-pytest/test_cuda_multigpu/test_cuda_multigpu-6c2ac26a5aa9db32.xml (deflated 28%) 2025-03-17T19:00:47.0198031Z adding: test/test-reports/python-pytest/test_linalg/test_linalg-6f67a8711e83b195.xml (deflated 28%) 2025-03-17T19:00:47.0756841Z adding: test/test-reports/python-pytest/test_linalg/test_linalg-ab684b8f4f36b9f9.xml (deflated 98%) 2025-03-17T19:00:47.0757750Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-e05e7b670386672f.xml (deflated 29%) 2025-03-17T19:00:47.0758796Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-90a27e40f94f8cfa.xml (deflated 29%) 2025-03-17T19:00:47.0759699Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-fc2c33c5a1f01529.xml (deflated 29%) 2025-03-17T19:00:47.0760602Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-88d25be6a3b5c539.xml (deflated 29%) 2025-03-17T19:00:47.0761504Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-2d14aacec60bd4d6.xml (deflated 29%) 2025-03-17T19:00:47.0762393Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-9969723ef84e7d06.xml (deflated 29%) 2025-03-17T19:00:47.0763274Z 
adding: test/test-reports/python-pytest/test.run_test/test.run_test-91d38a8d7f26e4d2.xml (deflated 29%) 2025-03-17T19:00:47.0764157Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-4211a01e309ce218.xml (deflated 29%) 2025-03-17T19:00:47.0765042Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-b1f11aae3993aff9.xml (deflated 29%) 2025-03-17T19:00:47.0765927Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-423739c2488d52e8.xml (deflated 29%) 2025-03-17T19:00:47.0766809Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-e989fabe23703622.xml (deflated 29%) 2025-03-17T19:00:47.0767693Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-50dc84e934c4a63f.xml (deflated 29%) 2025-03-17T19:00:47.0768577Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-2f59ef7984b296b0.xml (deflated 29%) 2025-03-17T19:00:47.0769469Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-767ba6440a31ad74.xml (deflated 29%) 2025-03-17T19:00:47.0770350Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-841876b03e8a8cc4.xml (deflated 29%) 2025-03-17T19:00:47.0771238Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-a173831d4e0e46da.xml (deflated 29%) 2025-03-17T19:00:47.0772127Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-83c169a7ef82a84e.xml (deflated 29%) 2025-03-17T19:00:47.0773067Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-af931f32d7365db6.xml (deflated 29%) 2025-03-17T19:00:47.0773955Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-b279ac93b8c80fed.xml (deflated 28%) 2025-03-17T19:00:47.0774845Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-f7ea9390b4924196.xml (deflated 57%) 2025-03-17T19:00:47.0775764Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-b931def11029e837.xml (deflated 72%) 2025-03-17T19:00:47.0776822Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-736a192722d72b96.xml (deflated 83%) 2025-03-17T19:00:47.0777706Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-84f286a1635376ee.xml (deflated 67%) 2025-03-17T19:00:47.0778646Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-0cb5d3498dc191d5.xml (deflated 79%) 2025-03-17T19:00:47.0779544Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-95be225b9fba206b.xml (deflated 61%) 2025-03-17T19:00:47.0780442Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-939ab5c76bb12ce7.xml (deflated 37%) 2025-03-17T19:00:47.0781327Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-990aec42497ca142.xml (deflated 80%) 2025-03-17T19:00:47.0782214Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-07196d1b3157ee36.xml (deflated 50%) 2025-03-17T19:00:47.0783098Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-8ee32a9e5b9ed6f9.xml (deflated 35%) 2025-03-17T19:00:47.0783982Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-b8808f85550161f9.xml (deflated 47%) 2025-03-17T19:00:47.0784872Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-09e314c872b0f4da.xml (deflated 47%) 2025-03-17T19:00:47.0785842Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-26fae44af71b680a.xml (deflated 83%) 2025-03-17T19:00:47.0786820Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-6c22bc4b82cb7e7a.xml 
(deflated 58%) 2025-03-17T19:00:47.0787725Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-ca27a87cacfdcdda.xml (deflated 58%) 2025-03-17T19:00:47.0788638Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-eafceec4f8a44ffa.xml (deflated 59%) 2025-03-17T19:00:47.0789534Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-940d2c50996bce03.xml (deflated 37%) 2025-03-17T19:00:47.0790431Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-bc6db8ab503c6547.xml (deflated 37%) 2025-03-17T19:00:47.0791320Z adding: test/test-reports/python-pytest/test.run_test/test.run_test-a070f88e32d22444.xml (deflated 90%) 2025-03-17T19:00:47.0792328Z adding: test/test-reports/python-unittest/test_autoload/TEST-TestDeviceBackendAutoload-20250317184533.xml (deflated 42%) 2025-03-17T19:00:47.0793447Z adding: test/test-reports/python-unittest/test_autoload/TEST-TestDeviceBackendAutoload-20250317184544.xml (deflated 42%) 2025-03-17T19:00:47.0846272Z ##[group]Run # Remove any previous usage logs if they exist 2025-03-17T19:00:47.0846768Z # Remove any previous usage logs if they exist 2025-03-17T19:00:47.0847169Z rm -f logs-*.zip 2025-03-17T19:00:47.0847666Z # this workflow is also run in bazel build test, but we dont generate usage reports for it 2025-03-17T19:00:47.0848246Z # so check to see if the file exists first 2025-03-17T19:00:47.0848636Z if [ -f 'usage_log.txt' ]; then 2025-03-17T19:00:47.0849033Z  zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt' 2025-03-17T19:00:47.0849403Z fi 2025-03-17T19:00:47.0849808Z if find "test/test-reports" -name "*.log" 2>/dev/null | grep -q .; then 2025-03-17T19:00:47.0850399Z  zip -r "logs-${FILE_SUFFIX}.zip" test/test-reports -i '*.log' 2025-03-17T19:00:47.0850823Z fi 2025-03-17T19:00:47.0856380Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:47.0856873Z env: 2025-03-17T19:00:47.0857114Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:47.0857603Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:47.0858237Z FILE_SUFFIX: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T19:00:47.0858659Z ##[endgroup] 2025-03-17T19:00:47.0927870Z adding: usage_log.txt (deflated 97%) 2025-03-17T19:00:47.1013070Z adding: test/test-reports/test_native_mha_1.1_b0aa962e8bd653f2_.log (deflated 90%) 2025-03-17T19:00:47.1014183Z adding: test/test-reports/test_transformers_privateuse1_1.1_539f378470681bfd_.log (deflated 65%) 2025-03-17T19:00:47.1015126Z adding: test/test-reports/test_show_pickle_1.1_b9ad8ad54d2c1d87_.log (deflated 51%) 2025-03-17T19:00:47.1055131Z adding: test/test-reports/test_torch_1.1_b84af707d22a9fa2_.log (deflated 92%) 2025-03-17T19:00:47.1062110Z adding: test/test-reports/test_fake_tensor_1.1_28139d77a1d1428b_.log (deflated 91%) 2025-03-17T19:00:47.1062868Z adding: test/test-reports/test_jit_disabled_1.1_596f15a7188a449a_.log (deflated 57%) 2025-03-17T19:00:47.1063601Z adding: test/test-reports/test_autocast_1.1_cc69e67092fa0da7_.log (deflated 77%) 2025-03-17T19:00:47.1067592Z adding: test/test-reports/test_python_dispatch_1.1_119ead219a3d25e8_.log (deflated 85%) 2025-03-17T19:00:47.1068526Z adding: test/test-reports/test_cpp_extensions_mtia_backend_1.1_44084757394ae727_.log (deflated 68%) 2025-03-17T19:00:47.1069400Z adding: test/test-reports/test_autograd_fallback_1.1_1cbaae17125864ad_.log (deflated 84%) 2025-03-17T19:00:47.1071842Z adding: test/test-reports/test_multiprocessing_1.1_61ae5ef5222789ff_.log (deflated 87%) 
2025-03-17T19:00:47.1072759Z adding: test/test-reports/test_cpp_extensions_stream_and_event_1.1_41f931f8a8314995_.log (deflated 59%) 2025-03-17T19:00:47.1094913Z adding: test/test-reports/test_tensor_creation_ops_1.1_6781f75dec4cdf15_.log (deflated 93%) 2025-03-17T19:00:47.1130029Z adding: test/test-reports/test_nn_1.2_f503d55d50a7756d_.log (deflated 93%) 2025-03-17T19:00:47.1134673Z adding: test/test-reports/nn.test_pooling_1.1_a3bff201591fb55b_.log (deflated 87%) 2025-03-17T19:00:47.1161909Z adding: test/test-reports/test_overrides_1.1_fb09db68dbd55ada_.log (deflated 92%) 2025-03-17T19:00:47.1162647Z adding: test/test-reports/test_cuda_nvml_based_avail_1.1_8463a41bacee3d33_.log (deflated 11%) 2025-03-17T19:00:47.1163518Z adding: test/test-reports/test_multiprocessing_spawn_1.1_200437085c306514_.log (deflated 82%) 2025-03-17T19:00:47.1189273Z adding: test/test-reports/test_reductions_1.4_ae7deb5d9cea05b3_.log (deflated 94%) 2025-03-17T19:00:47.1214902Z adding: test/test-reports/test_reductions_2.4_9c93a165c9354117_.log (deflated 94%) 2025-03-17T19:00:47.1241635Z adding: test/test-reports/test_reductions_3.4_5dab9477c3721fdd_.log (deflated 94%) 2025-03-17T19:00:47.1265952Z adding: test/test-reports/test_reductions_4.4_ba6861575f79aea2_.log (deflated 94%) 2025-03-17T19:00:47.1269698Z adding: test/test-reports/distributions.test_distributions_1.2_edf0b4eda7cfebe2_.log (deflated 90%) 2025-03-17T19:00:47.1273030Z adding: test/test-reports/distributions.test_distributions_2.2_70e8e91b504d7941_.log (deflated 89%) 2025-03-17T19:00:47.1273959Z adding: test/test-reports/test_cpp_extensions_aot_ninja_1.1_ac77513849f8a989_.log (deflated 77%) 2025-03-17T19:00:47.1274992Z adding: test/test-reports/test_cpp_extensions_aot_no_ninja_1.1_de96bfc24b04b98a_.log (deflated 77%) 2025-03-17T19:00:47.1275969Z adding: test/test-reports/dynamo.test_dynamic_shapes_1.1_66148cf7cc91d005_.log (stored 0%) 2025-03-17T19:00:47.1276867Z adding: test/test-reports/dynamo.test_interop_1.1_ca65cc9a1e043938_.log (stored 0%) 2025-03-17T19:00:47.1277744Z adding: test/test-reports/test_appending_byte_serializer_1.1_a63dcbab84b0fa4e_.log (stored 0%) 2025-03-17T19:00:47.1278653Z adding: test/test-reports/dynamo.test_sdpa_1.1_8e7906986b35d8b0_.log (stored 0%) 2025-03-17T19:00:47.1279529Z adding: test/test-reports/dynamo.test_frame_init_1.1_c13ece9b00021916_.log (stored 0%) 2025-03-17T19:00:47.1280470Z adding: test/test-reports/dynamo.test_sys_1.1_64cb24f9e574fa3b_.log (stored 0%) 2025-03-17T19:00:47.1281245Z adding: test/test-reports/dynamo.test_trace_rules_1.1_cfad119dedf48a0d_.log (stored 0%) 2025-03-17T19:00:47.1282189Z adding: test/test-reports/dynamo.test_config_1.1_9dbd401c49538040_.log (stored 0%) 2025-03-17T19:00:47.1283025Z adding: test/test-reports/test_jiterator_1.1_cb77d15636b8a39e_.log (deflated 48%) 2025-03-17T19:00:47.1283836Z adding: test/test-reports/dynamo.test_sources_1.1_b9cfbc1c36502625_.log (stored 0%) 2025-03-17T19:00:47.1284671Z adding: test/test-reports/dynamo.test_optimizers_1.1_3a168843ab7b6642_.log (stored 0%) 2025-03-17T19:00:47.1285503Z adding: test/test-reports/dynamo.test_metrics_context_1.1_bf6b4aa32404aa0b_.log (stored 0%) 2025-03-17T19:00:47.1286353Z adding: test/test-reports/xpu.test_conv_1.1_b077d59cfa130752_.log (deflated 49%) 2025-03-17T19:00:47.1287252Z adding: test/test-reports/dynamo.test_python_dispatcher_1.1_ffe97d0e60fa0bc8_.log (stored 0%) 2025-03-17T19:00:47.1287956Z adding: test/test-reports/test_hub_1.1_2bb48e25dcbe5c38_.log (stored 0%) 2025-03-17T19:00:47.1288632Z adding: 
test/test-reports/dynamo.test_flat_apply_1.1_891f51f0de7e51d7_.log (deflated 39%) 2025-03-17T19:00:47.1289415Z adding: test/test-reports/xpu.test_gemm_1.1_96cd9b90a122d9e0_.log (deflated 49%) 2025-03-17T19:00:47.1290277Z adding: test/test-reports/dynamo.test_verify_correctness_1.1_8b5729137063c427_.log (stored 0%) 2025-03-17T19:00:47.1291161Z adding: test/test-reports/test_cuda_expandable_segments_1.1_9c51a554ba2820a5_.log (stored 0%) 2025-03-17T19:00:47.1292094Z adding: test/test-reports/dynamo.test_debug_utils_1.1_8c980ec79db20671_.log (stored 0%) 2025-03-17T19:00:47.1292844Z adding: test/test-reports/dynamo.test_structured_trace_1.1_75cf2d42e2d17a8a_.log (stored 0%) 2025-03-17T19:00:47.1293673Z adding: test/test-reports/test_matmul_cuda_1.1_f289c0b4ff44df5a_.log (deflated 49%) 2025-03-17T19:00:47.1294388Z adding: test/test-reports/dynamo.test_aot_autograd_1.1_0c85651929c4f40a_.log (stored 0%) 2025-03-17T19:00:47.1295140Z adding: test/test-reports/dynamo.test_higher_order_ops_1.1_f0d94ee1b03b1a8f_.log (stored 0%) 2025-03-17T19:00:47.1295914Z adding: test/test-reports/dynamo.test_aot_autograd_cache_1.1_33b64123222a7b5c_.log (stored 0%) 2025-03-17T19:00:47.1296632Z adding: test/test-reports/dynamo.test_exc_1.1_18493546b479bbe0_.log (stored 0%) 2025-03-17T19:00:47.1297320Z adding: test/test-reports/test_cuda_multigpu_1.1_558c6263cf1e8ea1_.log (deflated 49%) 2025-03-17T19:00:47.1298040Z adding: test/test-reports/dynamo.test_ctx_manager_1.1_a8596f376e0da0ce_.log (stored 0%) 2025-03-17T19:00:47.1298756Z adding: test/test-reports/dynamo.test_minifier_1.1_343af30e8b933cef_.log (stored 0%) 2025-03-17T19:00:47.1299473Z adding: test/test-reports/dynamo.test_reorder_logs_1.1_81783ea7b5f9265f_.log (stored 0%) 2025-03-17T19:00:47.1300171Z adding: test/test-reports/test_linalg_4.4_f38c51a687fd4567_.log (deflated 49%) 2025-03-17T19:00:47.1300882Z adding: test/test-reports/dynamo.test_python_autograd_1.1_e6153da66cc6ad4f_.log (stored 0%) 2025-03-17T19:00:47.1301638Z adding: test/test-reports/cpp.scalar_tensor_test_1.1_02b4a34badab858f_.log (deflated 60%) 2025-03-17T19:00:47.1302409Z adding: test/test-reports/test_appending_byte_serializer_1.1_5f9268ab8a0f92e4_.log (stored 0%) 2025-03-17T19:00:47.1303158Z adding: test/test-reports/dynamo.test_interop_1.1_cc5f5f7f74c8a699_.log (stored 0%) 2025-03-17T19:00:47.1303891Z adding: test/test-reports/dynamo.test_dynamic_shapes_1.1_02f0defef9ed58ab_.log (stored 0%) 2025-03-17T19:00:47.1304629Z adding: test/test-reports/dynamo.test_frame_init_1.1_fcab03d7021b6bc9_.log (stored 0%) 2025-03-17T19:00:47.1305330Z adding: test/test-reports/dynamo.test_sdpa_1.1_7a27abd0e2b827af_.log (stored 0%) 2025-03-17T19:00:47.1306001Z adding: test/test-reports/dynamo.test_sys_1.1_4391828934f55a76_.log (stored 0%) 2025-03-17T19:00:47.1306759Z adding: test/test-reports/dynamo.test_trace_rules_1.1_abffc84f018341c1_.log (stored 0%) 2025-03-17T19:00:47.1307496Z adding: test/test-reports/dynamo.test_config_1.1_fada092543466d71_.log (stored 0%) 2025-03-17T19:00:47.1308191Z adding: test/test-reports/test_jiterator_1.1_73e510f607dd847c_.log (deflated 48%) 2025-03-17T19:00:47.1308870Z adding: test/test-reports/cpp.scalar_test_1.1_4497784467721f3d_.log (deflated 59%) 2025-03-17T19:00:47.1309604Z adding: test/test-reports/dynamo.test_optimizers_1.1_140c1d8176c3643a_.log (stored 0%) 2025-03-17T19:00:47.1310419Z adding: test/test-reports/dynamo.test_sources_1.1_42d698941c70731f_.log (stored 0%) 2025-03-17T19:00:47.1311227Z adding: 
test/test-reports/dynamo.test_metrics_context_1.1_979b9d8b58e382dd_.log (stored 0%) 2025-03-17T19:00:47.1312119Z adding: test/test-reports/dynamo.test_python_dispatcher_1.1_52e8b0679f510c03_.log (stored 0%) 2025-03-17T19:00:47.1312947Z adding: test/test-reports/xpu.test_conv_1.1_12aa43f3a0447b1a_.log (deflated 49%) 2025-03-17T19:00:47.1313639Z adding: test/test-reports/test_hub_1.1_749272caad8b55de_.log (stored 0%) 2025-03-17T19:00:47.1314371Z adding: test/test-reports/cpp.legacy_vmap_test_1.1_0d5197896e6b3ed5_.log (deflated 81%) 2025-03-17T19:00:47.1315181Z adding: test/test-reports/dynamo.test_flat_apply_1.1_9601024a04ce8f52_.log (deflated 39%) 2025-03-17T19:00:47.1316003Z adding: test/test-reports/xpu.test_gemm_1.1_0ff2d3eaaf944bca_.log (deflated 49%) 2025-03-17T19:00:47.1316810Z adding: test/test-reports/dynamo.test_verify_correctness_1.1_3965078b9cf458c9_.log (stored 0%) 2025-03-17T19:00:47.1317627Z adding: test/test-reports/cpp.native_test_1.1_64c89cfdf0dd9242_.log (deflated 54%) 2025-03-17T19:00:47.1318400Z adding: test/test-reports/dynamo.test_debug_utils_1.1_39f38591d739c0a6_.log (stored 0%) 2025-03-17T19:00:47.1319161Z adding: test/test-reports/test_cuda_expandable_segments_1.1_7cb4f6e3c71443a5_.log (stored 0%) 2025-03-17T19:00:47.1320058Z adding: test/test-reports/dynamo.test_structured_trace_1.1_34635567f924920c_.log (stored 0%) 2025-03-17T19:00:47.1320876Z adding: test/test-reports/cpp.tensor_iterator_test_1.1_6f25ec091ffb470b_.log (deflated 88%) 2025-03-17T19:00:47.1321708Z adding: test/test-reports/test_matmul_cuda_1.1_ce4df283cb806ddf_.log (deflated 49%) 2025-03-17T19:00:47.1322452Z adding: test/test-reports/dynamo.test_aot_autograd_1.1_f60138f91279311e_.log (stored 0%) 2025-03-17T19:00:47.1323300Z adding: test/test-reports/dynamo.test_higher_order_ops_1.1_709c7a07840e0096_.log (stored 0%) 2025-03-17T19:00:47.1324143Z adding: test/test-reports/cpp.undefined_tensor_test_1.1_96280f652688d6de_.log (deflated 50%) 2025-03-17T19:00:47.1324906Z adding: test/test-reports/dynamo.test_exc_1.1_14ed4e83c8691eaa_.log (stored 0%) 2025-03-17T19:00:47.1325640Z adding: test/test-reports/dynamo.test_aot_autograd_cache_1.1_0acb90ccdb589c5c_.log (stored 0%) 2025-03-17T19:00:47.1326402Z adding: test/test-reports/test_cuda_multigpu_1.1_248bddab2d96d73a_.log (deflated 49%) 2025-03-17T19:00:47.1327122Z adding: test/test-reports/cpp.operators_test_1.1_eb608413b0d638fe_.log (deflated 61%) 2025-03-17T19:00:47.1328055Z adding: test/test-reports/dynamo.test_minifier_1.1_800608f6df1664b7_.log (stored 0%) 2025-03-17T19:00:47.1329342Z adding: test/test-reports/dynamo.test_ctx_manager_1.1_dd9b05d04cbb1498_.log (stored 0%) 2025-03-17T19:00:47.1330072Z adding: test/test-reports/dynamo.test_reorder_logs_1.1_ba71478cb07b9596_.log (stored 0%) 2025-03-17T19:00:47.1330818Z adding: test/test-reports/dynamo.test_python_autograd_1.1_1ca0d732324be9ed_.log (stored 0%) 2025-03-17T19:00:47.1331545Z adding: test/test-reports/test_linalg_4.4_3fdcb04aeda3374a_.log (deflated 91%) 2025-03-17T19:00:47.1332210Z adding: test/test-reports/cpp.Dict_test_1.1_adfc084bd540d27f_.log (deflated 49%) 2025-03-17T19:00:47.1332890Z adding: test/test-reports/cpp.Dimname_test_1.1_f14063a1943e5e01_.log (deflated 49%) 2025-03-17T19:00:47.1333606Z adding: test/test-reports/cpp.NamedTensor_test_1.1_4caf204a8569b243_.log (deflated 49%) 2025-03-17T19:00:47.1334382Z adding: test/test-reports/cpp.apply_utils_test_1.1_1bd8bb4becef262d_.log (deflated 49%) 2025-03-17T19:00:47.1335070Z adding: test/test-reports/cpp.atest_1.1_440dc7a1d94d53a1_.log 
(deflated 49%) 2025-03-17T19:00:47.1335710Z adding: test/test-reports/cpp.basic_1.1_623085591d4cd470_.log (deflated 49%) 2025-03-17T19:00:47.1336381Z adding: test/test-reports/cpp.broadcast_test_1.1_73d049865ff5cebe_.log (deflated 49%) 2025-03-17T19:00:47.1337299Z adding: test/test-reports/cpp.cpu_generator_test_1.1_5fd57f72ec6f8b38_.log (deflated 49%) 2025-03-17T19:00:47.1338042Z adding: test/test-reports/cpp.dlconvertor_test_1.1_232afac92c651d52_.log (deflated 49%) 2025-03-17T19:00:47.1338874Z adding: test/test-reports/cpp.extension_backend_test_1.1_eaaaf9f2e58ebc7f_.log (deflated 49%) 2025-03-17T19:00:47.1339628Z adding: test/test-reports/cpp.lazy_tensor_test_1.1_752bccdf12d03d29_.log (deflated 49%) 2025-03-17T19:00:47.1340357Z adding: test/test-reports/cpp.legacy_vmap_test_1.1_28d5c62bed339262_.log (deflated 49%) 2025-03-17T19:00:47.1341065Z adding: test/test-reports/cpp.native_test_1.1_a21da3d0639afa37_.log (deflated 49%) 2025-03-17T19:00:47.1341766Z adding: test/test-reports/cpp.operators_test_1.1_b0503f72b39d318a_.log (deflated 49%) 2025-03-17T19:00:47.1342492Z adding: test/test-reports/cpp.scalar_tensor_test_1.1_0027000a2db0e1ba_.log (deflated 49%) 2025-03-17T19:00:47.1343202Z adding: test/test-reports/cpp.scalar_test_1.1_e3101926528e47f2_.log (deflated 49%) 2025-03-17T19:00:47.1343922Z adding: test/test-reports/cpp.tensor_iterator_test_1.1_21403fac721819bd_.log (deflated 49%) 2025-03-17T19:00:47.1344688Z adding: test/test-reports/cpp.undefined_tensor_test_1.1_839e42e72f815f50_.log (deflated 49%) 2025-03-17T19:00:47.1345425Z adding: test/test-reports/cpp.wrapdim_test_1.1_2df29b19c01e1e91_.log (deflated 49%) 2025-03-17T19:00:47.1346121Z adding: test/test-reports/cpp.Dimname_test_1.1_8e0ecc54b70fb6cf_.log (deflated 59%) 2025-03-17T19:00:47.1347049Z adding: test/test-reports/cpp.NamedTensor_test_1.1_d74a4df2b7f291c7_.log (deflated 70%) 2025-03-17T19:00:47.1347768Z adding: test/test-reports/cpp.Dict_test_1.1_4498a5c43ec6aefe_.log (deflated 83%) 2025-03-17T19:00:47.1348471Z adding: test/test-reports/cpp.apply_utils_test_1.1_0274eb5f1f9a5372_.log (deflated 65%) 2025-03-17T19:00:47.1349154Z adding: test/test-reports/cpp.atest_1.1_31d075fb0d3e5701_.log (deflated 72%) 2025-03-17T19:00:47.1349799Z adding: test/test-reports/cpp.basic_1.1_91e00bd5baa3c395_.log (deflated 61%) 2025-03-17T19:00:47.1350484Z adding: test/test-reports/cpp.broadcast_test_1.1_d40531619fcbdfbb_.log (deflated 50%) 2025-03-17T19:00:47.1351220Z adding: test/test-reports/cpp.cpu_generator_test_1.1_38eee4cb1fd2cf06_.log (deflated 78%) 2025-03-17T19:00:47.1351957Z adding: test/test-reports/cpp.dlconvertor_test_1.1_766054119d161978_.log (deflated 56%) 2025-03-17T19:00:47.1352713Z adding: test/test-reports/cpp.extension_backend_test_1.1_3209a34cedc3b7c8_.log (deflated 50%) 2025-03-17T19:00:47.1353455Z adding: test/test-reports/cpp.wrapdim_test_1.1_3ca6cd9b9df2d8b0_.log (deflated 49%) 2025-03-17T19:00:47.1354164Z adding: test/test-reports/cpp.lazy_tensor_test_1.1_cae01356be7237e8_.log (deflated 55%) 2025-03-17T19:00:47.1381373Z ##[group]Run # Remove any previous debugging artifacts if they exist 2025-03-17T19:00:47.1381928Z # Remove any previous debugging artifacts if they exist 2025-03-17T19:00:47.1382363Z rm -f debug-*.zip 2025-03-17T19:00:47.1382676Z if [ -d 'test/debug' ]; then 2025-03-17T19:00:47.1383068Z  zip -r "debug-${FILE_SUFFIX}.zip" test/debug 2025-03-17T19:00:47.1383440Z fi 2025-03-17T19:00:47.1388883Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:47.1389303Z env: 
2025-03-17T19:00:47.1389548Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:47.1390035Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:47.1390680Z FILE_SUFFIX: test-dynamo_wrapped-1-3-linux.2xlarge_38909654187 2025-03-17T19:00:47.1391176Z ##[endgroup] 2025-03-17T19:00:47.1468803Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-03-17T19:00:47.1469165Z with: 2025-03-17T19:00:47.1469410Z s3-bucket: gha-artifacts 2025-03-17T19:00:47.1469760Z s3-prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:47.1470141Z retention-days: 14 2025-03-17T19:00:47.1470416Z if-no-files-found: warn 2025-03-17T19:00:47.1470709Z path: test-jsons-*.zip 2025-03-17T19:00:47.1470987Z name: artifact 2025-03-17T19:00:47.1471237Z region: us-east-1 2025-03-17T19:00:47.1471483Z env: 2025-03-17T19:00:47.1471716Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:47.1472202Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:47.1472809Z ##[endgroup] 2025-03-17T19:00:47.5017432Z NOTE: s3-prefix specified, ignoring name parameter 2025-03-17T19:00:47.5017941Z With the provided path, there will be 1 file uploaded 2025-03-17T19:00:47.5018464Z Uploading to s3 prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:47.5057477Z Starting upload of test-jsons-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:47.6269211Z Finished upload of test-jsons-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:47.6456087Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-03-17T19:00:47.6456459Z with: 2025-03-17T19:00:47.6456707Z s3-bucket: gha-artifacts 2025-03-17T19:00:47.6457050Z s3-prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:47.6457428Z retention-days: 14 2025-03-17T19:00:47.6457707Z if-no-files-found: error 2025-03-17T19:00:47.6458008Z path: test-reports-*.zip 2025-03-17T19:00:47.6458309Z name: artifact 2025-03-17T19:00:47.6458559Z region: us-east-1 2025-03-17T19:00:47.6458807Z env: 2025-03-17T19:00:47.6459028Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:47.6459518Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:47.6460183Z ##[endgroup] 2025-03-17T19:00:47.9671508Z NOTE: s3-prefix specified, ignoring name parameter 2025-03-17T19:00:47.9672316Z With the provided path, there will be 1 file uploaded 2025-03-17T19:00:47.9673216Z Uploading to s3 prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:47.9712554Z Starting upload of test-reports-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:48.2237858Z Finished upload of test-reports-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:48.2418466Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-03-17T19:00:48.2418844Z with: 2025-03-17T19:00:48.2419090Z s3-bucket: gha-artifacts 2025-03-17T19:00:48.2419439Z s3-prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:48.2419834Z retention-days: 14 2025-03-17T19:00:48.2420114Z if-no-files-found: ignore 2025-03-17T19:00:48.2420411Z path: logs-*.zip 2025-03-17T19:00:48.2420668Z name: artifact 2025-03-17T19:00:48.2420920Z region: us-east-1 2025-03-17T19:00:48.2421168Z env: 2025-03-17T19:00:48.2421408Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:48.2421895Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:48.2422416Z ##[endgroup] 2025-03-17T19:00:48.5636601Z NOTE: s3-prefix specified, ignoring name parameter 
2025-03-17T19:00:48.5637455Z With the provided path, there will be 1 file uploaded 2025-03-17T19:00:48.5637949Z Uploading to s3 prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:48.5676030Z Starting upload of logs-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:48.6787639Z Finished upload of logs-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:48.6967244Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-03-17T19:00:48.6967621Z with: 2025-03-17T19:00:48.6967871Z s3-bucket: gha-artifacts 2025-03-17T19:00:48.6968222Z s3-prefix: pytorch/pytorch/13905937446/1/artifact 2025-03-17T19:00:48.6968605Z retention-days: 14 2025-03-17T19:00:48.6968906Z if-no-files-found: ignore 2025-03-17T19:00:48.6969305Z path: debug-*.zip 2025-03-17T19:00:48.6969563Z name: artifact 2025-03-17T19:00:48.6969803Z region: us-east-1 2025-03-17T19:00:48.6970054Z env: 2025-03-17T19:00:48.6970289Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:48.6970775Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:48.6971297Z ##[endgroup] 2025-03-17T19:00:49.0118317Z No files were found with the provided path: debug-*.zip. No artifacts will be uploaded. 2025-03-17T19:00:49.0304926Z ##[group]Run # shellcheck disable=SC2156 2025-03-17T19:00:49.0305340Z # shellcheck disable=SC2156 2025-03-17T19:00:49.0305962Z find . -iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2025-03-17T19:00:49.0312065Z shell: /usr/bin/bash -e {0} 2025-03-17T19:00:49.0312366Z env: 2025-03-17T19:00:49.0312610Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:49.0313090Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:49.0313632Z ##[endgroup] 2025-03-17T19:00:49.2685460Z Prepare all required actions 2025-03-17T19:00:49.2685892Z Getting action download info 2025-03-17T19:00:49.3749665Z ##[group]Run ./.github/actions/upload-utilization-stats 2025-03-17T19:00:49.3750072Z with: 2025-03-17T19:00:49.3750308Z job_id: 38909654187 2025-03-17T19:00:49.3750751Z job_name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T19:00:49.3751256Z workflow_name: pull 2025-03-17T19:00:49.3751534Z workflow_run_id: 13905937446 2025-03-17T19:00:49.3751836Z workflow_attempt: 1 2025-03-17T19:00:49.3752099Z env: 2025-03-17T19:00:49.3752344Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:49.3752836Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:49.3753359Z ##[endgroup] 2025-03-17T19:00:49.3780606Z ##[group]Run echo "workflow_id: 13905937446" 2025-03-17T19:00:49.3781000Z echo "workflow_id: 13905937446" 2025-03-17T19:00:49.3781375Z echo "workflow_attempt: 1" 2025-03-17T19:00:49.3781789Z echo "workflow_Name: pull" 2025-03-17T19:00:49.3782125Z echo "job_id: 38909654187" 2025-03-17T19:00:49.3782676Z echo "job_name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)" 2025-03-17T19:00:49.3788622Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:49.3789038Z env: 2025-03-17T19:00:49.3789286Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:49.3789783Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:49.3790310Z ##[endgroup] 2025-03-17T19:00:49.3812218Z workflow_id: 13905937446 2025-03-17T19:00:49.3812792Z workflow_attempt: 1 2025-03-17T19:00:49.3813101Z workflow_Name: pull 2025-03-17T19:00:49.3813600Z job_id: 
38909654187 2025-03-17T19:00:49.3814068Z job_name: linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge) 2025-03-17T19:00:49.3868676Z ##[group]Run nick-fields/retry@v3.0.0 2025-03-17T19:00:49.3869016Z with: 2025-03-17T19:00:49.3869258Z shell: bash 2025-03-17T19:00:49.3869505Z timeout_minutes: 5 2025-03-17T19:00:49.3869773Z max_attempts: 5 2025-03-17T19:00:49.3870031Z retry_wait_seconds: 30 2025-03-17T19:00:49.3870523Z command: set -eu python3 -m pip install python-dateutil==2.8.2 boto3==1.35.42 pandas==2.1.3 2025-03-17T19:00:49.3871076Z polling_interval_seconds: 1 2025-03-17T19:00:49.3871382Z warning_on_retry: true 2025-03-17T19:00:49.3871668Z continue_on_error: false 2025-03-17T19:00:49.3871946Z env: 2025-03-17T19:00:49.3872178Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:49.3872660Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:49.3873190Z ##[endgroup] 2025-03-17T19:00:49.7183899Z Defaulting to user installation because normal site-packages is not writeable 2025-03-17T19:00:49.7977100Z Collecting python-dateutil==2.8.2 2025-03-17T19:00:49.8188587Z Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) 2025-03-17T19:00:50.7708963Z Collecting boto3==1.35.42 2025-03-17T19:00:50.7756159Z Downloading boto3-1.35.42-py3-none-any.whl (139 kB) 2025-03-17T19:00:51.2866426Z Collecting pandas==2.1.3 2025-03-17T19:00:51.2930269Z Downloading pandas-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB) 2025-03-17T19:00:51.4365169Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil==2.8.2) (1.15.0) 2025-03-17T19:00:51.4412211Z Requirement already satisfied: botocore<1.36.0,>=1.35.42 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (1.35.99) 2025-03-17T19:00:51.4416239Z Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.4) 2025-03-17T19:00:51.4421633Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.0) 2025-03-17T19:00:51.5362774Z Collecting tzdata>=2022.1 2025-03-17T19:00:51.5398233Z Downloading tzdata-2025.1-py2.py3-none-any.whl (346 kB) 2025-03-17T19:00:51.5494590Z Requirement already satisfied: pytz>=2020.1 in /usr/lib/python3.9/site-packages (from pandas==2.1.3) (2022.7.1) 2025-03-17T19:00:52.3758010Z Collecting numpy<2,>=1.22.4 2025-03-17T19:00:52.3796617Z Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB) 2025-03-17T19:00:52.5785515Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.42->boto3==1.35.42) (1.25.10) 2025-03-17T19:00:52.7498194Z Installing collected packages: python-dateutil, tzdata, numpy, pandas, boto3 2025-03-17T19:00:57.5987120Z Attempting uninstall: boto3 2025-03-17T19:00:57.5988094Z Found existing installation: boto3 1.35.33 2025-03-17T19:00:57.6084354Z Uninstalling boto3-1.35.33: 2025-03-17T19:00:57.6096741Z Successfully uninstalled boto3-1.35.33 2025-03-17T19:00:57.6617478Z Successfully installed boto3-1.35.42 numpy-1.26.4 pandas-2.1.3 python-dateutil-2.8.2 tzdata-2025.1 2025-03-17T19:00:58.4677967Z Command completed after 1 attempt(s). 
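The dependency install above runs through nick-fields/retry@v3.0.0 with max_attempts: 5 and retry_wait_seconds: 30. Below is a minimal bash sketch of the same behaviour outside the action, assuming a plain shell environment; the retry helper function is illustrative and not part of the workflow, only the pip command and the pinned versions come from the job log.

    #!/usr/bin/env bash
    # Illustrative retry wrapper; the job itself uses the nick-fields/retry action.
    set -eu

    retry() {
      local max_attempts=5 wait_seconds=30 attempt=1
      until "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
          echo "Command failed after ${attempt} attempt(s)." >&2
          return 1
        fi
        echo "Attempt ${attempt} failed; retrying in ${wait_seconds}s..." >&2
        attempt=$((attempt + 1))
        sleep "$wait_seconds"
      done
      echo "Command completed after ${attempt} attempt(s)."
    }

    # Pinned dependencies needed by the utilization-stats upload that follows.
    retry python3 -m pip install python-dateutil==2.8.2 boto3==1.35.42 pandas==2.1.3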
2025-03-17T19:00:58.4733033Z ##[group]Run python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-03-17T19:00:58.4733779Z python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-03-17T19:00:58.4734359Z  --workflow-run-id "13905937446" \ 2025-03-17T19:00:58.4734731Z  --workflow-name "pull" \ 2025-03-17T19:00:58.4735093Z  --workflow-run-attempt "1" \ 2025-03-17T19:00:58.4735449Z  --job-id "38909654187" \ 2025-03-17T19:00:58.4735994Z  --job-name "linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)" 2025-03-17T19:00:58.4742361Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:00:58.4742776Z env: 2025-03-17T19:00:58.4743025Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:00:58.4743528Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:00:58.4744078Z ##[endgroup] 2025-03-17T19:00:59.9518149Z repo: pytorch/pytorch 2025-03-17T19:00:59.9518696Z Downloading logs-test-dynamo_wrapped-1-3-linux.2xlarge_38909654187.zip 2025-03-17T19:00:59.9519216Z Converted Log Model: UtilizationMetadata: 2025-03-17T19:00:59.9520622Z UtilizationMetadata(level='metadata', workflow_id='13905937446', job_id='38909654187', workflow_name='pull', job_name='linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)', usage_collect_interval=1.0, data_model_version=1.0, start_at=1742233500, gpu_count=0, cpu_count=8, gpu_type='', error=None) 2025-03-17T19:00:59.9522130Z [Db Segments] detected pytest cmd: 13, generated segments: 13 2025-03-17T19:00:59.9522562Z [db model] Peek db timeseries 2025-03-17T19:00:59.9522859Z :{ 2025-03-17T19:00:59.9523091Z "created_at": 1742238059, 2025-03-17T19:00:59.9523395Z "type": "utilization", 2025-03-17T19:00:59.9523908Z "tags": [ 2025-03-17T19:00:59.9524150Z "record" 2025-03-17T19:00:59.9524408Z ], 2025-03-17T19:00:59.9524648Z "time_stamp": 1742233500, 2025-03-17T19:00:59.9524960Z "repo": "pytorch/pytorch", 2025-03-17T19:00:59.9525270Z "workflow_id": 13905937446, 2025-03-17T19:00:59.9525571Z "run_attempt": 1, 2025-03-17T19:00:59.9525845Z "job_id": 38909654187, 2025-03-17T19:00:59.9526136Z "workflow_name": "pull", 2025-03-17T19:00:59.9526602Z "job_name": "linux-focal-py3.13-clang10 / test (dynamo_wrapped, 1, 3, linux.2xlarge)", 2025-03-17T19:00:59.9527112Z "json_data": "{}" 2025-03-17T19:00:59.9527372Z } 2025-03-17T19:00:59.9527906Z Writing 1 documents to S3 ossci-utilization/util_metadata/v_1.0/pytorch/pytorch/13905937446/1/38909654187/metadata 2025-03-17T19:00:59.9528955Z Done! Finish writing document to S3 ossci-utilization/util_metadata/v_1.0/pytorch/pytorch/13905937446/1/38909654187/metadata 2025-03-17T19:00:59.9529970Z Writing 900 documents to S3 ossci-utilization/util_timeseries/v_1.0/pytorch/pytorch/13905937446/1/38909654187/time_series 2025-03-17T19:00:59.9531166Z Done! 
Finish writing document to S3 ossci-utilization/util_timeseries/v_1.0/pytorch/pytorch/13905937446/1/38909654187/time_series 2025-03-17T19:01:00.0605869Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2025-03-17T19:01:00.0606346Z with: 2025-03-17T19:01:00.0606577Z env: 2025-03-17T19:01:00.0606815Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:01:00.0607316Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:01:00.0607848Z ##[endgroup] 2025-03-17T19:01:00.0630631Z ##[group]Run set -eou pipefail 2025-03-17T19:01:00.0630995Z set -eou pipefail 2025-03-17T19:01:00.0631290Z  2025-03-17T19:01:00.0631690Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2025-03-17T19:01:00.0632194Z for _ in $(seq 1440); do 2025-03-17T19:01:00.0632556Z  # Break if no ssh session exists anymore 2025-03-17T19:01:00.0632944Z  if [ "$(who)" = "" ]; then 2025-03-17T19:01:00.0633274Z  break 2025-03-17T19:01:00.0633533Z  fi 2025-03-17T19:01:00.0633834Z  echo "." 2025-03-17T19:01:00.0634102Z  sleep 5 2025-03-17T19:01:00.0634362Z done 2025-03-17T19:01:00.0640187Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:01:00.0640598Z env: 2025-03-17T19:01:00.0640833Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:01:00.0641315Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:01:00.0641839Z ##[endgroup] 2025-03-17T19:01:00.0664003Z Holding runner for 2 hours until all ssh sessions have logged out 2025-03-17T19:01:00.0735717Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-03-17T19:01:00.0736317Z # ignore expansion of "docker ps -q" since it could be empty 2025-03-17T19:01:00.0736999Z # shellcheck disable=SC2046 2025-03-17T19:01:00.0737383Z docker stop $(docker ps -q) || true 2025-03-17T19:01:00.0737761Z # Prune all of the docker images 2025-03-17T19:01:00.0738118Z docker system prune -af 2025-03-17T19:01:00.0743361Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:01:00.0743768Z env: 2025-03-17T19:01:00.0744009Z GIT_DEFAULT_BRANCH: main 2025-03-17T19:01:00.0744497Z DOCKER_CONTAINER_ID: aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:01:00.0745024Z ##[endgroup] 2025-03-17T19:01:00.9808934Z aa508029845b 2025-03-17T19:01:01.3119675Z Deleted Containers: 2025-03-17T19:01:01.3120135Z aa508029845bd585f159f36b255ff71e021a60eb6f722864c03887f86723b5b6 2025-03-17T19:01:01.3120498Z 2025-03-17T19:01:04.3061085Z Deleted Images: 2025-03-17T19:01:04.3062280Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10:70252cb1aa0d6173d24140841afd02bc363684c5 2025-03-17T19:01:04.3063945Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-py3.13-clang10@sha256:2be986ebdf9f912bf00998d401fc1f11365c93a5dd9c7f239a9fcf15540db4d8 2025-03-17T19:01:04.3065296Z deleted: sha256:e6cba42f176eca517d1f8851c8f196198dcfd7dec3dbfdd0d4505d8ee86a6e4a 2025-03-17T19:01:04.3066004Z deleted: sha256:930a2d3b383850e7736bf06bc4a0553845d719f675a724a4fc303181b9d0a483 2025-03-17T19:01:04.3066740Z deleted: sha256:505e7c0665329e692ba905ff9215ed1a199c54bba8076169bbf706437b77a77b 2025-03-17T19:01:04.3067414Z deleted: sha256:5224414287461dcca85d545a05cc332a10bbb93e095e6d5b12be78a060031968 2025-03-17T19:01:04.3068091Z deleted: sha256:aeef50471bc6139fe2c4716bd05ab5f35d97d152002b707d05a02140dd14df2c 2025-03-17T19:01:04.3068854Z deleted: 
sha256:67436aa20a35cf052329cbdfe7bd93a4e9e90cdb4143fbe782ea3ff8c783266d 2025-03-17T19:01:04.3069530Z deleted: sha256:7691c014102087de1053a414d794ce5b70db0c430675d7a0a85d037849431af5 2025-03-17T19:01:04.3070204Z deleted: sha256:6a27e4ccf184f71c6318681bc87ee487269ff6e0b242b26b7889d8ca0eaf676f 2025-03-17T19:01:04.3070889Z deleted: sha256:a1b6752f9cc9ee4f27308a18291db8a19da99aac902aad5269f9502430e99271 2025-03-17T19:01:04.3071708Z deleted: sha256:2c169b57e02097815e219a992150ab78b73bbaeeee3b0966fdc1aa71d4fa71fc 2025-03-17T19:01:04.3072389Z deleted: sha256:3e8585ea91520811d6e156b2086018a3eb50eb51f6dfc61f5ba2c5d56f37761d 2025-03-17T19:01:04.3073066Z deleted: sha256:711a3d61a60a79e7da92252e695f12b09813186a11fd0e23564c748fc464b422 2025-03-17T19:01:04.3073756Z deleted: sha256:d889f03537a6ebc0c1f233e2edc70af95ada98e1c07d704ee3bdd384894c370e 2025-03-17T19:01:04.3074450Z deleted: sha256:ad0019b2aa547b0e643c7353d474aca8d6c397f687c6078f8db49bf946e2c178 2025-03-17T19:01:04.3075144Z deleted: sha256:5e6532cbcea2d3c9b908178dae9c5752b8b2b7fd7f466547aa25e3f27af86119 2025-03-17T19:01:04.3075825Z deleted: sha256:c5db0bf99d5203b7f4633a9742f00449d639e7233d343421cf7519cf0242dc07 2025-03-17T19:01:04.3076507Z deleted: sha256:bb7008d7ccd8994f4676d9dedd03be2ccf13f57811c594e8afa9ba093653587e 2025-03-17T19:01:04.3077200Z deleted: sha256:ec79b7a601b998c6c8b39deeff0da5471e2a2af995242e9010329744a03eda79 2025-03-17T19:01:04.3077888Z deleted: sha256:5a9c1a5eb1a081d1472f1afbe88e8d4957f7828a345be24ce91579de660fee6c 2025-03-17T19:01:04.3078568Z deleted: sha256:c2299a6465e36d595070cb7fe9c28466c7d67238ef5d1b08729ea60695065296 2025-03-17T19:01:04.3079249Z deleted: sha256:c65e1054ae576cca0e4875babc3e941eaec70f4cc14f896ae5bac0b3f24e2a3b 2025-03-17T19:01:04.3079932Z deleted: sha256:73dd156a30c640419f4fc460f096810ecc47491d2b4b07236de2f40d059e5042 2025-03-17T19:01:04.3080602Z deleted: sha256:d063d786f32f596fefe9afe82a384b0425d854852d80092a798c2f121caa3fd4 2025-03-17T19:01:04.3081274Z deleted: sha256:5308d27907d1715d3d35ef322e57d626fef0e2303e6d3f1afc4574657de81e23 2025-03-17T19:01:04.3081985Z deleted: sha256:14a74f152415f7472678b0b84cd7798b22b5ef2b4179192c59af8dd2d9e3893b 2025-03-17T19:01:04.3082664Z deleted: sha256:b9abbcfb067157201e71dd93610fea7968febff5973fc7e0f75b136fa7185e1e 2025-03-17T19:01:04.3083350Z deleted: sha256:29629b66f96f3fd0181e342c1f0fee783b6f7070bb0595ab3ff33abed0b2a676 2025-03-17T19:01:04.3084037Z deleted: sha256:4ecc8bf0d02e3feaa7a6469a8ba09488d64cc8c372a260518a28ef08883be5ae 2025-03-17T19:01:04.3084749Z deleted: sha256:f859aed7499e1622110a28922de45070741723f26377ec286fd2e8d34e4eb6ff 2025-03-17T19:01:04.3085432Z deleted: sha256:5bb9cb45fa746ebd258851f5371ae1af7a0a3635bc262ae216dd38cb0d4c4a7b 2025-03-17T19:01:04.3086113Z deleted: sha256:5a1337e5f7e9199eb34369033e553b3a914abdd23408558267575f56ab4709e4 2025-03-17T19:01:04.3086790Z deleted: sha256:9fc8213dbb65142cfb0e907676dd22139b5c396dd1c1fdf9c55879720c3f735e 2025-03-17T19:01:04.3087469Z deleted: sha256:073d6463071cbd3710bae4882e04cc7dcbf41637eb73052cd7f487334ab5eb42 2025-03-17T19:01:04.3088151Z deleted: sha256:30a80155e18d2b1a3b5118b4cd140294c78e9e3111bfcb6d2e5dea894f428411 2025-03-17T19:01:04.3088834Z deleted: sha256:0ef1755108c36dc165d31b9c5852da6ab0f9f592266b33fab32ae1f4148ead86 2025-03-17T19:01:04.3089515Z deleted: sha256:116ccd0216d08829d173678ab2307c22614f1f0dd1884e39e8c8ff5ee3fc43e9 2025-03-17T19:01:04.3090234Z deleted: sha256:d1d195a5843c981a6886bdd0443ab0f407a7619d803de046a5b92a76f2331c42 2025-03-17T19:01:04.3090924Z deleted: 
sha256:87417c8235d7caa4dbb4751215c6b82d1f94567b08dbbda00a495845099ef408 2025-03-17T19:01:04.3091615Z deleted: sha256:b62b12f457bdb58ff6a838235d2b8b9c247967b27d8dcfd6c99fdaa19389a3a8 2025-03-17T19:01:04.3092370Z deleted: sha256:186d56bc4e92ca02d4d7fc02dfe0ffad53df190abbb9b9d85b86b21abfc4b4c3 2025-03-17T19:01:04.3093230Z deleted: sha256:b286ffd6f4604d15b97326ceca5807cd1d7320fcb1ddd7931f1e6f1a060cb934 2025-03-17T19:01:04.3094150Z deleted: sha256:b74403a3319ea27943562af7926839c8a0fa64d4e55e4f576e5645c24046e989 2025-03-17T19:01:04.3094874Z deleted: sha256:728e4744cf7c2df43311bab3a069633071263147debee25e876af63e0a319263 2025-03-17T19:01:04.3095551Z deleted: sha256:19a149967f000a298d5ee9621c86693770dc771e13585de564ed7dd7616f9381 2025-03-17T19:01:04.3096217Z deleted: sha256:002e2554e709a07857b2ca602771547b3d572f0f14a841b4ef0291bbd25e0f16 2025-03-17T19:01:04.3096906Z deleted: sha256:bdcafeb14c8a0628ce3560bdf3023dfb621fbba9972bb8a03404cad2cb235094 2025-03-17T19:01:04.3097597Z deleted: sha256:0d59fac7f973adc87001b3f22b30e8963f5e0880764dd2f415c581ec9418f5e5 2025-03-17T19:01:04.3098385Z deleted: sha256:bbaea6d30a4ecacd627992cf14863e93683f545af220766496b98ec9488f308c 2025-03-17T19:01:04.3099077Z deleted: sha256:e6f9f8b9fb5d27f898a681d1ecb1889f59507cd5f78bc68fbeee68a06da360a3 2025-03-17T19:01:04.3099782Z deleted: sha256:cfab0b20f2678b400786a13af54080b38503e113f6cbd6c98c10b717b5a8bc79 2025-03-17T19:01:04.3100460Z deleted: sha256:b684a251f4987f75021928c84558ea8cae8f1a7631e0ad9b6bcd4cbb7ed14a8b 2025-03-17T19:01:04.3101141Z deleted: sha256:5e4cca2b702757fa845032d6e6fe07c0a0d6c90771c383b4cea7ff40723d5fa4 2025-03-17T19:01:04.3101827Z deleted: sha256:f21cf4cd814e59dd71bd728373e71983ef4e599be2b056c499230a6ed00a15a3 2025-03-17T19:01:04.3102507Z deleted: sha256:d951a771afcbe8d83e6e5544c714c7650bca6a69d4731c2a9e429816f9e84c8b 2025-03-17T19:01:04.3103362Z deleted: sha256:9450553b38e0786c2b2e2ed2c9ae2e3cb4dd6bff39794e3819218f266d7a3060 2025-03-17T19:01:04.3104038Z deleted: sha256:278616d05967cb61f48a3c725cc7195e553823664b7101a7e25ab830427f9b7f 2025-03-17T19:01:04.3104737Z deleted: sha256:e439ccbda91e210a586738d624cfbe51e9aad7fba6ca5440f43015151fa27907 2025-03-17T19:01:04.3106027Z deleted: sha256:ddbfc5abc42ce3ca17a1f6edffcae52e0de5f829489c8e6361bcfb1661bfcb12 2025-03-17T19:01:04.3106911Z deleted: sha256:6e718749bf1bc6d39197993968729199241f6c2f17db2c96098213d1668c50a7 2025-03-17T19:01:04.3107608Z deleted: sha256:66dacbcdbdb8f60dc8ab6991876bee78e2f3d3f590449bd7fbb1c8bdedd5a51f 2025-03-17T19:01:04.3108294Z deleted: sha256:035694e19ab0ce41062331a96a89aeb4c71ac58edcd303822a46a55a9d98b100 2025-03-17T19:01:04.3108980Z deleted: sha256:a84967c17e157b5b03d13b654caeab69505f6de1e54839832b6fc551d1acda87 2025-03-17T19:01:04.3109659Z deleted: sha256:17293c5aff14bdce9a91c1b8b56298bf8886931eb154851798ce677e4096a81b 2025-03-17T19:01:04.3110338Z deleted: sha256:2b7585374d4b0b779990297ecb8b0655c9300becbc1248f2132abdafabd86e0c 2025-03-17T19:01:04.3111014Z deleted: sha256:1c19142f724fe6688c419004d01225dcd9906a9b6f766f9b2b9ec64df46f9fb8 2025-03-17T19:01:04.3111701Z deleted: sha256:aeb2f92f63d41390c8db73349958efcab4af3d5932eaeda18cffbd3a7b5045e8 2025-03-17T19:01:04.3112401Z deleted: sha256:f86ffcdfdad50ea2e2ed76dac61785981953a473574c0dd1f1cb2a5e81c363ac 2025-03-17T19:01:04.3113091Z deleted: sha256:75f8ccf18d82bd237962be0ec72038c294b7549427b625b2f2af591bd6c0d136 2025-03-17T19:01:04.3113769Z deleted: sha256:1a3480429c692ba6c9a6f268aa7fe9177e12028dc1e1bb6b33c9ce27de25832c 2025-03-17T19:01:04.3114439Z deleted: 
sha256:18c86063d61709398d1c361b249f86257ef21b3e2d5c01161a70e4b4d1d95254 2025-03-17T19:01:04.3115117Z deleted: sha256:faff59a79cff7555b6fd776e6eb18192c010a2149eede3d80615166c95218dc1 2025-03-17T19:01:04.3115796Z deleted: sha256:861987832732de48a90c0f77782c5ccf047effd8b8ca51d925ee5af53dfb33cc 2025-03-17T19:01:04.3116470Z deleted: sha256:40331af181a4f8e5f002ee137b42ed142d44798eb33968284fcacd19d3a8be0a 2025-03-17T19:01:04.3117220Z deleted: sha256:5b7bf75b3ecf3949295fa5ae9c10d5f1f716c0f94bae87c54fda8df24c86b917 2025-03-17T19:01:04.3117918Z deleted: sha256:9dbd1bbd3074aa992fba995d9ef725b303bec36b2ca7f280429c926b5ecba007 2025-03-17T19:01:04.3118622Z deleted: sha256:b4bcac81c67a1bf6f08d798f7e6fa64238ca78d8fd9717ecc20a63c52c2d7496 2025-03-17T19:01:04.3119318Z deleted: sha256:e0a3a1e873392d8e9a5a4e5db509f3a631b976de3cac1a608d858007ff04cbf9 2025-03-17T19:01:04.3119996Z deleted: sha256:35e2ec60186f800e15886631a25908fed2f91981e226c52f2f41c0c9fd9706f7 2025-03-17T19:01:04.3120662Z deleted: sha256:78c3d9bea8b10439a786741831770c707a9f29fa8899de6186e14075ee20605f 2025-03-17T19:01:04.3121322Z deleted: sha256:e5770f83923f919a95d02f612578ab31274d590ca91e02964c17a4928b9c63f1 2025-03-17T19:01:04.3122036Z deleted: sha256:d8cecb46ae446dc10b80c59742f1ff61ede7f989d02ce42f80850a5b8c9b0c85 2025-03-17T19:01:04.3122723Z deleted: sha256:dea12facc42ab24336ac5f0b3fa5c52093933059253170bc7e6bbd09741d6f6b 2025-03-17T19:01:04.3123409Z deleted: sha256:f5178fe523d9ce39eb7c179843f6347b1b5ecf4013127ea61bdefbc9a67c4dc9 2025-03-17T19:01:04.3124093Z deleted: sha256:a09f70b2b470f2355d8d76c0679e5af4869ba2ca99528b7c1dd34e0c80d3bcaf 2025-03-17T19:01:04.3124843Z deleted: sha256:83b8a2c6d05ca30aebc777b7ca48473c8e2ea303693a081e905cdb9faa825baa 2025-03-17T19:01:04.3125539Z deleted: sha256:cced3c5caa7f256ccfe90d07d4cc479c3c0a6e1618e85314502eb57d6d91ec91 2025-03-17T19:01:04.3126226Z deleted: sha256:235161f90012c725d99f0227039ef822ed6d21dae85e359dc34cdbc81a6417ef 2025-03-17T19:01:04.3126911Z deleted: sha256:5a65ffc1e05b4d26e98afe4b618a5bf11e8258f919ccfb9a407f2744ab3ee595 2025-03-17T19:01:04.3127608Z deleted: sha256:4f549cca47410dfe53e8fa4ab6b355e025dbeecad47305e334c716772a112581 2025-03-17T19:01:04.3128289Z deleted: sha256:62ea77316bd516045ec0173d4a11313bf794683617f3bb95a052ec2d1b6d52cd 2025-03-17T19:01:04.3128966Z deleted: sha256:877ed97efed07fb79a50a15695bb827b7765ace82f61c32cef16ee4f09f446c0 2025-03-17T19:01:04.3129658Z deleted: sha256:fffe76c64ef2dee2d80a8bb3ad13d65d596d04a45510b1956a976a69215dae92 2025-03-17T19:01:04.3130068Z 2025-03-17T19:01:04.3130212Z Total reclaimed space: 10.75GB 2025-03-17T19:01:04.3229432Z Post job cleanup. 2025-03-17T19:01:04.3289441Z Post job cleanup. 
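Condensed into a single script, the teardown recorded above (hold the runner while SSH sessions are still logged in, stop the job container, prune images) looks roughly like the sketch below; the `docker system df` call is an extra visibility aid and is not part of the job.

    #!/usr/bin/env bash
    # Rough consolidation of the teardown steps logged above; not the workflow's exact scripts.
    set -euo pipefail

    echo "Holding runner for 2 hours until all ssh sessions have logged out"
    for _ in $(seq 1440); do
      # Break as soon as no ssh session exists anymore
      if [ "$(who)" = "" ]; then
        break
      fi
      sleep 5
    done

    docker system df        # not in the original job; shows what prune can reclaim
    # shellcheck disable=SC2046  # "docker ps -q" may expand to nothing
    docker stop $(docker ps -q) || true
    docker system prune -af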
2025-03-17T19:01:04.4323657Z [command]/usr/bin/git version 2025-03-17T19:01:04.4368958Z git version 2.47.1 2025-03-17T19:01:04.4401796Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/d701caad-6397-4908-8118-52585d481805/.gitconfig' 2025-03-17T19:01:04.4409295Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/d701caad-6397-4908-8118-52585d481805' before making global git config changes 2025-03-17T19:01:04.4410307Z Adding repository directory to the temporary git global config as a safe directory 2025-03-17T19:01:04.4414201Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-03-17T19:01:04.4449888Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-03-17T19:01:04.4476324Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-03-17T19:01:04.4754622Z Entering 'android/libs/fbjni' 2025-03-17T19:01:04.4804703Z Entering 'third_party/FP16' 2025-03-17T19:01:04.4852648Z Entering 'third_party/FXdiv' 2025-03-17T19:01:04.4901812Z Entering 'third_party/NNPACK' 2025-03-17T19:01:04.4950438Z Entering 'third_party/NVTX' 2025-03-17T19:01:04.4999316Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T19:01:04.5048885Z Entering 'third_party/XNNPACK' 2025-03-17T19:01:04.5112626Z Entering 'third_party/benchmark' 2025-03-17T19:01:04.5161181Z Entering 'third_party/composable_kernel' 2025-03-17T19:01:04.5214924Z Entering 'third_party/cpp-httplib' 2025-03-17T19:01:04.5263282Z Entering 'third_party/cpuinfo' 2025-03-17T19:01:04.5312318Z Entering 'third_party/cudnn_frontend' 2025-03-17T19:01:04.5362364Z Entering 'third_party/cutlass' 2025-03-17T19:01:04.5420488Z Entering 'third_party/eigen' 2025-03-17T19:01:04.5471936Z Entering 'third_party/fbgemm' 2025-03-17T19:01:04.5521450Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T19:01:04.5569511Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T19:01:04.5619929Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T19:01:04.5675946Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T19:01:04.5723762Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T19:01:04.5773970Z Entering 'third_party/flash-attention' 2025-03-17T19:01:04.5822683Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T19:01:04.5876864Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T19:01:04.5934386Z Entering 'third_party/flatbuffers' 2025-03-17T19:01:04.5985244Z Entering 'third_party/fmt' 2025-03-17T19:01:04.6033912Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T19:01:04.6083904Z Entering 'third_party/gloo' 2025-03-17T19:01:04.6132349Z Entering 'third_party/googletest' 2025-03-17T19:01:04.6180647Z Entering 'third_party/ideep' 2025-03-17T19:01:04.6228703Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T19:01:04.6283477Z Entering 'third_party/ittapi' 2025-03-17T19:01:04.6332119Z Entering 'third_party/kineto' 2025-03-17T19:01:04.6380628Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T19:01:04.6427804Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T19:01:04.6477492Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T19:01:04.6525823Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 
2025-03-17T19:01:04.6573910Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T19:01:04.6621511Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T19:01:04.6671274Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T19:01:04.6719034Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T19:01:04.6767644Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T19:01:04.6815892Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T19:01:04.6865637Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T19:01:04.6913506Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T19:01:04.6962503Z Entering 'third_party/kleidiai' 2025-03-17T19:01:04.7012299Z Entering 'third_party/mimalloc' 2025-03-17T19:01:04.7061637Z Entering 'third_party/nlohmann' 2025-03-17T19:01:04.7111821Z Entering 'third_party/onnx' 2025-03-17T19:01:04.7176717Z Entering 'third_party/onnx/third_party/pybind11' 2025-03-17T19:01:04.7227509Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T19:01:04.7278703Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T19:01:04.7326062Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T19:01:04.7373533Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T19:01:04.7420641Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T19:01:04.7469468Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T19:01:04.7516813Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T19:01:04.7565694Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T19:01:04.7612502Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T19:01:04.7663403Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T19:01:04.7712502Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T19:01:04.7780782Z Entering 'third_party/pocketfft' 2025-03-17T19:01:04.7828972Z Entering 'third_party/protobuf' 2025-03-17T19:01:04.7879933Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T19:01:04.7928021Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T19:01:04.7978870Z Entering 'third_party/psimd' 2025-03-17T19:01:04.8027678Z Entering 'third_party/pthreadpool' 2025-03-17T19:01:04.8077317Z Entering 'third_party/pybind11' 2025-03-17T19:01:04.8126011Z Entering 'third_party/python-peachpy' 2025-03-17T19:01:04.8174505Z Entering 'third_party/sleef' 2025-03-17T19:01:04.8222689Z Entering 'third_party/tensorpipe' 2025-03-17T19:01:04.8270955Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T19:01:04.8317957Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T19:01:04.8365459Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-03-17T19:01:04.8412616Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T19:01:04.8459873Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T19:01:04.8526321Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-03-17T19:01:04.8546572Z http.https://github.com/.extraheader 2025-03-17T19:01:04.8556581Z [command]/usr/bin/git config --local 
--unset-all http.https://github.com/.extraheader 2025-03-17T19:01:04.8584645Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-03-17T19:01:04.8846701Z Entering 'android/libs/fbjni' 2025-03-17T19:01:04.8879587Z http.https://github.com/.extraheader 2025-03-17T19:01:04.8909742Z Entering 'third_party/FP16' 2025-03-17T19:01:04.8944096Z http.https://github.com/.extraheader 2025-03-17T19:01:04.8973404Z Entering 'third_party/FXdiv' 2025-03-17T19:01:04.9005829Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9035270Z Entering 'third_party/NNPACK' 2025-03-17T19:01:04.9067730Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9097452Z Entering 'third_party/NVTX' 2025-03-17T19:01:04.9129833Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9162983Z Entering 'third_party/VulkanMemoryAllocator' 2025-03-17T19:01:04.9194268Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9223654Z Entering 'third_party/XNNPACK' 2025-03-17T19:01:04.9257708Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9301630Z Entering 'third_party/benchmark' 2025-03-17T19:01:04.9334935Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9365708Z Entering 'third_party/composable_kernel' 2025-03-17T19:01:04.9399196Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9435439Z Entering 'third_party/cpp-httplib' 2025-03-17T19:01:04.9469482Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9498907Z Entering 'third_party/cpuinfo' 2025-03-17T19:01:04.9531707Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9562034Z Entering 'third_party/cudnn_frontend' 2025-03-17T19:01:04.9594620Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9623845Z Entering 'third_party/cutlass' 2025-03-17T19:01:04.9656698Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9695783Z Entering 'third_party/eigen' 2025-03-17T19:01:04.9729032Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9761122Z Entering 'third_party/fbgemm' 2025-03-17T19:01:04.9794396Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9824733Z Entering 'third_party/fbgemm/third_party/asmjit' 2025-03-17T19:01:04.9857410Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9887277Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2025-03-17T19:01:04.9919738Z http.https://github.com/.extraheader 2025-03-17T19:01:04.9949562Z Entering 'third_party/fbgemm/third_party/cutlass' 2025-03-17T19:01:04.9982744Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0018369Z Entering 'third_party/fbgemm/third_party/googletest' 2025-03-17T19:01:05.0052358Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0081749Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2025-03-17T19:01:05.0114472Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0145414Z Entering 'third_party/flash-attention' 2025-03-17T19:01:05.0178467Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0208763Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-03-17T19:01:05.0241306Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0277015Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-03-17T19:01:05.0310319Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0348940Z Entering 'third_party/flatbuffers' 2025-03-17T19:01:05.0381620Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0413389Z 
Entering 'third_party/fmt' 2025-03-17T19:01:05.0445944Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0475462Z Entering 'third_party/gemmlowp/gemmlowp' 2025-03-17T19:01:05.0507838Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0537726Z Entering 'third_party/gloo' 2025-03-17T19:01:05.0570636Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0600558Z Entering 'third_party/googletest' 2025-03-17T19:01:05.0633110Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0663126Z Entering 'third_party/ideep' 2025-03-17T19:01:05.0696245Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0724644Z Entering 'third_party/ideep/mkl-dnn' 2025-03-17T19:01:05.0757130Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0793674Z Entering 'third_party/ittapi' 2025-03-17T19:01:05.0826057Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0855809Z Entering 'third_party/kineto' 2025-03-17T19:01:05.0888895Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0918371Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-03-17T19:01:05.0950710Z http.https://github.com/.extraheader 2025-03-17T19:01:05.0980247Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-03-17T19:01:05.1012608Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1043963Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-03-17T19:01:05.1076673Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1106920Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-03-17T19:01:05.1139522Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1170240Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-03-17T19:01:05.1202717Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1231692Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-03-17T19:01:05.1264419Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1295730Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-03-17T19:01:05.1327564Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1358301Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-03-17T19:01:05.1390498Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1420552Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-03-17T19:01:05.1452788Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1483821Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-03-17T19:01:05.1516411Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1548269Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-03-17T19:01:05.1581302Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1611298Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-03-17T19:01:05.1644864Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1675704Z Entering 'third_party/kleidiai' 2025-03-17T19:01:05.1708738Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1740578Z Entering 'third_party/mimalloc' 2025-03-17T19:01:05.1773339Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1803206Z Entering 'third_party/nlohmann' 2025-03-17T19:01:05.1836063Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1866369Z Entering 'third_party/onnx' 2025-03-17T19:01:05.1899448Z http.https://github.com/.extraheader 2025-03-17T19:01:05.1946141Z Entering 
'third_party/onnx/third_party/pybind11' 2025-03-17T19:01:05.1979004Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2010447Z Entering 'third_party/opentelemetry-cpp' 2025-03-17T19:01:05.2043312Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2074826Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-03-17T19:01:05.2106554Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2136403Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-03-17T19:01:05.2168493Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2198033Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-03-17T19:01:05.2229900Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2259218Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-03-17T19:01:05.2290888Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2321485Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-03-17T19:01:05.2353790Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2382865Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-03-17T19:01:05.2415180Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2444376Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-03-17T19:01:05.2476480Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2505665Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-03-17T19:01:05.2540360Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2571865Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-03-17T19:01:05.2604047Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2635419Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-03-17T19:01:05.2668258Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2717513Z Entering 'third_party/pocketfft' 2025-03-17T19:01:05.2750600Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2779677Z Entering 'third_party/protobuf' 2025-03-17T19:01:05.2812972Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2847613Z Entering 'third_party/protobuf/third_party/benchmark' 2025-03-17T19:01:05.2879664Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2909665Z Entering 'third_party/protobuf/third_party/googletest' 2025-03-17T19:01:05.2942781Z http.https://github.com/.extraheader 2025-03-17T19:01:05.2973434Z Entering 'third_party/psimd' 2025-03-17T19:01:05.3006234Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3035975Z Entering 'third_party/pthreadpool' 2025-03-17T19:01:05.3068980Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3098443Z Entering 'third_party/pybind11' 2025-03-17T19:01:05.3131187Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3161114Z Entering 'third_party/python-peachpy' 2025-03-17T19:01:05.3194520Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3224109Z Entering 'third_party/sleef' 2025-03-17T19:01:05.3257863Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3287132Z Entering 'third_party/tensorpipe' 2025-03-17T19:01:05.3319743Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3349152Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-03-17T19:01:05.3380906Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3409927Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-03-17T19:01:05.3442490Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3471641Z Entering 
'third_party/tensorpipe/third_party/libuv' 2025-03-17T19:01:05.3504097Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3532953Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-03-17T19:01:05.3565109Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3593546Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-03-17T19:01:05.3625727Z http.https://github.com/.extraheader 2025-03-17T19:01:05.3744231Z A job completed hook has been configured by the self-hosted runner administrator 2025-03-17T19:01:05.3771199Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh' 2025-03-17T19:01:05.3776304Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-03-17T19:01:05.3776723Z ##[endgroup] 2025-03-17T19:01:12.1140241Z Cleaning up orphan processes
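For reference, the post-job credential cleanup that actions/checkout performs above (unsetting the injected http.https://github.com/.extraheader entry in the repository and in every submodule) reduces to roughly the following sketch; the checkout directory default is taken from the log and the CHECKOUT_DIR variable is only an illustration.

    #!/usr/bin/env bash
    # Sketch of the checkout post-job credential cleanup shown above.
    set -euo pipefail
    cd "${CHECKOUT_DIR:-/home/ec2-user/actions-runner/_work/pytorch/pytorch}"

    # Remove the injected Authorization header from the top-level repository, if set.
    git config --local --unset-all 'http.https://github.com/.extraheader' || :

    # Do the same in every submodule, recursively; '|| :' keeps iterating when a
    # submodule has no such entry configured.
    git submodule foreach --recursive \
      "git config --local --unset-all 'http.https://github.com/.extraheader' || :"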