2023-05-06T07:25:39.7883595Z Requested labels: linux.gcp.a100.large
2023-05-06T07:25:39.7883833Z Job defined at: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main
2023-05-06T07:25:39.7884048Z Reusable workflow chain:
2023-05-06T07:25:39.7884159Z pytorch/pytorch/.github/workflows/inductor-perf-test-nightly.yml@refs/heads/main (d719f0276d69a8315b65f4c4500cfc1cdaddb025)
2023-05-06T07:25:39.7884299Z -> pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main (d719f0276d69a8315b65f4c4500cfc1cdaddb025)
2023-05-06T07:25:39.7884442Z Waiting for a runner to pick up this job...
2023-05-06T10:17:25.8702960Z Job is about to start running on the runner: gh-ci-gcp-a100-11 (repository)
2023-05-06T10:17:30.0715730Z Current runner version: '2.304.0'
2023-05-06T10:17:30.0723611Z Runner name: 'gh-ci-gcp-a100-11'
2023-05-06T10:17:30.0724256Z Runner group name: 'Default'
2023-05-06T10:17:30.0725054Z Machine name: 'gh-ci-gcp-a100-11'
2023-05-06T10:17:30.0727670Z ##[group]GITHUB_TOKEN Permissions
2023-05-06T10:17:30.0728513Z Actions: write
2023-05-06T10:17:30.0728889Z Checks: write
2023-05-06T10:17:30.0729191Z Contents: write
2023-05-06T10:17:30.0729606Z Deployments: write
2023-05-06T10:17:30.0729974Z Discussions: write
2023-05-06T10:17:30.0730275Z Issues: write
2023-05-06T10:17:30.0730627Z Metadata: read
2023-05-06T10:17:30.0731007Z Packages: write
2023-05-06T10:17:30.0731317Z Pages: write
2023-05-06T10:17:30.0731683Z PullRequests: write
2023-05-06T10:17:30.0732110Z RepositoryProjects: write
2023-05-06T10:17:30.0732461Z SecurityEvents: write
2023-05-06T10:17:30.0732858Z Statuses: write
2023-05-06T10:17:30.0733211Z ##[endgroup]
2023-05-06T10:17:30.0737075Z Secret source: Actions
2023-05-06T10:17:30.0738224Z Prepare workflow directory
2023-05-06T10:17:30.3459433Z Prepare all required actions
2023-05-06T10:17:30.3703634Z Getting action download info
2023-05-06T10:17:30.5883783Z Download action repository 'pytorch/test-infra@main' (SHA:4112fce32582a7d32cf5d3bc9ec2bb0ab26fca50)
2023-05-06T10:17:31.3111118Z Download action repository 'pytorch/pytorch@main' (SHA:44caa395cb6a7f3a4efece66df4d5608aae51a64)
2023-05-06T10:17:36.3777450Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2023-05-06T10:17:37.0069452Z Getting action download info
2023-05-06T10:17:37.1659488Z Download action repository 'malfet/checkout@silent-checkout' (SHA:c7b8fef48edfe1bca0044a44b1f7f7c4318a3076)
2023-05-06T10:17:37.6505146Z Getting action download info
2023-05-06T10:17:37.7901565Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2023-05-06T10:17:38.2364048Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main (d719f0276d69a8315b65f4c4500cfc1cdaddb025)
2023-05-06T10:17:38.2366122Z ##[group] Inputs
2023-05-06T10:17:38.2366572Z build-environment: linux-bionic-cuda11.8-py3.10-gcc7-sm80
2023-05-06T10:17:38.2368123Z test-matrix: {"include": [{"config": "inductor_huggingface_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 1, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 2, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 3, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 4, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 5, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 6, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}]}
2023-05-06T10:17:38.2369908Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb
2023-05-06T10:17:38.2370342Z sync-tag:
2023-05-06T10:17:38.2371183Z timeout-minutes: 720
2023-05-06T10:17:38.2371476Z use-gha: anything-non-empty-to-use-gha
2023-05-06T10:17:38.2371738Z ##[endgroup]
2023-05-06T10:17:38.2372335Z Complete job name: cuda11.8-py3.10-gcc7-sm80 / test (inductor_torchbench_perf, 2, 3, linux.gcp.a100.large)
2023-05-06T10:17:38.2993226Z A job started hook has been configured by the self-hosted runner administrator
2023-05-06T10:17:38.3173669Z ##[group]Run '/home/weiwangmeta/pre-job.sh'
2023-05-06T10:17:38.3197531Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2023-05-06T10:17:38.3197969Z ##[endgroup]
2023-05-06T10:17:38.3866397Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2023-05-06T10:17:38.3866747Z with:
2023-05-06T10:17:38.3866956Z submodules: recursive
2023-05-06T10:17:38.3867185Z fetch-depth: 0
2023-05-06T10:17:38.3867389Z env:
2023-05-06T10:17:38.3867584Z GIT_DEFAULT_BRANCH: main
2023-05-06T10:17:38.3867813Z ##[endgroup]
2023-05-06T10:17:38.4088725Z ##[group]Run retry () {
2023-05-06T10:17:38.4089014Z retry () {
2023-05-06T10:17:38.4089300Z  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
2023-05-06T10:17:38.4089603Z }
2023-05-06T10:17:38.4089816Z echo "${GITHUB_WORKSPACE}"
2023-05-06T10:17:38.4090317Z if [ -z "${NO_SUDO}" ]; then
2023-05-06T10:17:38.4090621Z  retry sudo rm -rf "${GITHUB_WORKSPACE}"
2023-05-06T10:17:38.4090918Z else
2023-05-06T10:17:38.4091161Z  retry rm -rf "${GITHUB_WORKSPACE}"
2023-05-06T10:17:38.4091398Z fi
2023-05-06T10:17:38.4091665Z mkdir "${GITHUB_WORKSPACE}"
2023-05-06T10:17:38.4109978Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2023-05-06T10:17:38.4110280Z env:
2023-05-06T10:17:38.4110497Z GIT_DEFAULT_BRANCH: main
2023-05-06T10:17:38.4110722Z NO_SUDO:
2023-05-06T10:17:38.4110915Z ##[endgroup]
2023-05-06T10:17:38.4189877Z /home/weiwangmeta/actions-runner/_work/pytorch/pytorch
2023-05-06T10:17:41.6397363Z ##[group]Run malfet/checkout@silent-checkout
2023-05-06T10:17:41.6397707Z with:
2023-05-06T10:17:41.6397954Z ref: d719f0276d69a8315b65f4c4500cfc1cdaddb025
2023-05-06T10:17:41.6398194Z fetch-depth: 0
2023-05-06T10:17:41.6398421Z submodules: recursive
2023-05-06T10:17:41.6398665Z quiet-checkout: true
2023-05-06T10:17:41.6398906Z repository: pytorch/pytorch
2023-05-06T10:17:41.6399406Z token: ***
2023-05-06T10:17:41.6399622Z ssh-strict: true
2023-05-06T10:17:41.6399846Z persist-credentials: true
2023-05-06T10:17:41.6400083Z clean: true
2023-05-06T10:17:41.6400301Z lfs: false
2023-05-06T10:17:41.6400511Z set-safe-directory: true
2023-05-06T10:17:41.6400727Z env:
2023-05-06T10:17:41.6400987Z GIT_DEFAULT_BRANCH: main
2023-05-06T10:17:41.6401194Z ##[endgroup]
2023-05-06T10:17:41.7917354Z Syncing repository: pytorch/pytorch
2023-05-06T10:17:41.7919153Z ##[group]Getting Git version info
2023-05-06T10:17:41.7919840Z Working directory is '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch'
2023-05-06T10:17:41.7920581Z Unexpected error attempting to determine if executable file exists '/home/weiwangmeta/.local/bin/git': Error: EACCES: permission denied, stat '/home/weiwangmeta/.local/bin/git'
2023-05-06T10:17:41.7921398Z Unexpected error attempting to determine if executable file exists '/home/weiwangmeta/.local/bin/git': Error: EACCES: permission denied, stat '/home/weiwangmeta/.local/bin/git'
2023-05-06T10:17:41.7922078Z [command]/usr/bin/git version
2023-05-06T10:17:41.7922315Z git version 2.25.1
2023-05-06T10:17:41.7933569Z ##[endgroup]
2023-05-06T10:17:41.7952708Z Temporarily overriding HOME='/home/weiwangmeta/actions-runner/_work/_temp/3b23ce48-0834-4978-98ca-dca7431428d4' before making global git config changes
2023-05-06T10:17:41.7953221Z Adding repository directory to the temporary git global config as a safe directory
2023-05-06T10:17:41.7958925Z [command]/usr/bin/git config --global --add safe.directory /home/weiwangmeta/actions-runner/_work/pytorch/pytorch
2023-05-06T10:17:41.8015142Z Deleting the contents of '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch'
2023-05-06T10:17:41.8021113Z ##[group]Initializing the repository
2023-05-06T10:17:41.8024286Z [command]/usr/bin/git init /home/weiwangmeta/actions-runner/_work/pytorch/pytorch
2023-05-06T10:17:41.8076241Z Initialized empty Git repository in /home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/
2023-05-06T10:17:41.8085976Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2023-05-06T10:17:41.8133571Z ##[endgroup]
2023-05-06T10:17:41.8134312Z ##[group]Disabling automatic garbage collection
2023-05-06T10:17:41.8137787Z [command]/usr/bin/git config --local gc.auto 0
2023-05-06T10:17:41.8174415Z ##[endgroup]
2023-05-06T10:17:41.8174873Z ##[group]Setting up auth
2023-05-06T10:17:41.8183150Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2023-05-06T10:17:41.8221215Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :
2023-05-06T10:17:41.8484789Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2023-05-06T10:17:41.8523202Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
2023-05-06T10:17:41.8778507Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2023-05-06T10:17:41.8825710Z ##[endgroup]
2023-05-06T10:17:41.8826239Z ##[group]Fetching the repository
2023-05-06T10:17:41.8833552Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --quiet --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2023-05-06T10:19:00.6601966Z [command]/usr/bin/git rev-parse --verify --quiet d719f0276d69a8315b65f4c4500cfc1cdaddb025^{object}
2023-05-06T10:19:00.6637126Z d719f0276d69a8315b65f4c4500cfc1cdaddb025
2023-05-06T10:19:00.6645261Z ##[endgroup]
2023-05-06T10:19:00.6645762Z ##[group]Determining the checkout info
2023-05-06T10:19:00.6646158Z ##[endgroup]
2023-05-06T10:19:00.6646590Z ##[group]Checking out the ref
2023-05-06T10:19:00.6648652Z [command]/usr/bin/git checkout --quiet --force d719f0276d69a8315b65f4c4500cfc1cdaddb025
2023-05-06T10:19:02.2664385Z ##[endgroup]
2023-05-06T10:19:02.2664950Z ##[group]Setting up auth for fetching submodules
2023-05-06T10:19:02.2671314Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic ***
2023-05-06T10:19:02.2730181Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf
2023-05-06T10:19:02.2766796Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com:
2023-05-06T10:19:02.2805384Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com:
2023-05-06T10:19:02.2838663Z ##[endgroup]
2023-05-06T10:19:02.2839434Z ##[group]Fetching submodules
2023-05-06T10:19:02.2844516Z [command]/usr/bin/git submodule sync --recursive
2023-05-06T10:19:02.3131133Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive
2023-05-06T10:19:02.3420321Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni'
2023-05-06T10:19:02.3425761Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16'
2023-05-06T10:19:02.3433870Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv'
2023-05-06T10:19:02.3440419Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK'
2023-05-06T10:19:02.3447041Z Submodule 'third_party/QNNPACK' (https://github.com/pytorch/QNNPACK) registered for path 'third_party/QNNPACK'
2023-05-06T10:19:02.3454228Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator'
2023-05-06T10:19:02.3460466Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK'
2023-05-06T10:19:02.3466633Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark'
2023-05-06T10:19:02.3473618Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo'
2023-05-06T10:19:02.3480302Z Submodule 'third_party/cub' (https://github.com/NVlabs/cub.git) registered for path 'third_party/cub'
2023-05-06T10:19:02.3487291Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend'
2023-05-06T10:19:02.3494455Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass'
2023-05-06T10:19:02.3501311Z Submodule 'third_party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'third_party/eigen'
2023-05-06T10:19:02.3508896Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm'
2023-05-06T10:19:02.3516879Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers'
2023-05-06T10:19:02.3523223Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt'
2023-05-06T10:19:02.3530308Z Submodule 'third_party/foxi' (https://github.com/houseroad/foxi.git) registered for path 'third_party/foxi'
2023-05-06T10:19:02.3537500Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp'
2023-05-06T10:19:02.3544496Z Submodule 'third_party/gloo' (https://github.com/facebookincubator/gloo) registered for path 'third_party/gloo'
2023-05-06T10:19:02.3552949Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest'
2023-05-06T10:19:02.3561074Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep'
2023-05-06T10:19:02.3568733Z Submodule 'third_party/ios-cmake' (https://github.com/Yangqing/ios-cmake.git) registered for path 'third_party/ios-cmake'
2023-05-06T10:19:02.3576483Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi'
2023-05-06T10:19:02.3583998Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto'
2023-05-06T10:19:02.3591638Z Submodule 'third_party/nccl/nccl' (https://github.com/NVIDIA/nccl) registered for path 'third_party/nccl/nccl'
2023-05-06T10:19:02.3599272Z Submodule 'third_party/neon2sse' (https://github.com/intel/ARM_NEON_2_x86_SSE.git) registered for path 'third_party/neon2sse'
2023-05-06T10:19:02.3607233Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann'
2023-05-06T10:19:02.3615925Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx'
2023-05-06T10:19:02.3624404Z Submodule 'third_party/onnx-tensorrt' (https://github.com/onnx/onnx-tensorrt) registered for path 'third_party/onnx-tensorrt'
2023-05-06T10:19:02.3631358Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft'
2023-05-06T10:19:02.3640070Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf'
2023-05-06T10:19:02.3647920Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd'
2023-05-06T10:19:02.3655578Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool'
2023-05-06T10:19:02.3663822Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11'
2023-05-06T10:19:02.3672187Z Submodule 'third_party/python-enum' (https://github.com/PeachPy/enum34.git) registered for path 'third_party/python-enum'
2023-05-06T10:19:02.3680585Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy'
2023-05-06T10:19:02.3689215Z Submodule 'third_party/python-six' (https://github.com/benjaminp/six.git) registered for path 'third_party/python-six'
2023-05-06T10:19:02.3697352Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef'
2023-05-06T10:19:02.3705890Z Submodule 'third_party/tbb' (https://github.com/01org/tbb) registered for path 'third_party/tbb'
2023-05-06T10:19:02.3715241Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe'
2023-05-06T10:19:02.3724190Z Submodule 'third_party/zstd' (https://github.com/facebook/zstd.git) registered for path 'third_party/zstd'
2023-05-06T10:19:02.3806008Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'...
2023-05-06T10:19:02.9089843Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/FP16'...
2023-05-06T10:19:03.3229469Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'...
2023-05-06T10:19:03.6947822Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'...
2023-05-06T10:19:04.2152660Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/QNNPACK'...
2023-05-06T10:19:04.6959681Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'...
2023-05-06T10:19:07.9561040Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'...
2023-05-06T10:19:19.2373711Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/benchmark'...
2023-05-06T10:19:19.8980564Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'...
2023-05-06T10:19:20.6686201Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/cub'...
2023-05-06T10:19:22.5003079Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'...
2023-05-06T10:19:23.8742875Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/cutlass'...
2023-05-06T10:19:25.6817093Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/eigen'...
2023-05-06T10:19:31.1968762Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'...
2023-05-06T10:19:32.4253903Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'...
2023-05-06T10:19:34.6226506Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fmt'...
2023-05-06T10:19:36.3206447Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/foxi'...
2023-05-06T10:19:36.6963012Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'...
2023-05-06T10:19:37.3912121Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/gloo'...
2023-05-06T10:19:37.9517698Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/googletest'...
2023-05-06T10:19:39.6162485Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/ideep'...
2023-05-06T10:19:40.2471552Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/ios-cmake'...
2023-05-06T10:19:40.6419192Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/ittapi'...
2023-05-06T10:19:41.1634316Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto'...
2023-05-06T10:19:42.8690543Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/nccl/nccl'...
2023-05-06T10:19:43.4895797Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/neon2sse'...
2023-05-06T10:19:44.0381185Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'...
2023-05-06T10:19:51.3831599Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx'...
2023-05-06T10:19:54.1884119Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt'... 2023-05-06T10:19:54.9407289Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2023-05-06T10:19:55.3674819Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2023-05-06T10:20:03.5716481Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2023-05-06T10:20:03.9192751Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2023-05-06T10:20:04.3764412Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2023-05-06T10:20:05.5438078Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/python-enum'... 2023-05-06T10:20:05.9623483Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2023-05-06T10:20:06.5361290Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/python-six'... 2023-05-06T10:20:07.0778531Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2023-05-06T10:20:07.9069110Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tbb'... 2023-05-06T10:20:10.2421042Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2023-05-06T10:20:10.9669777Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/zstd'... 2023-05-06T10:20:13.9589021Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2023-05-06T10:20:13.9970045Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2023-05-06T10:20:14.0325810Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2023-05-06T10:20:14.0847209Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2023-05-06T10:20:14.1365710Z Submodule path 'third_party/QNNPACK': checked out '7d2a4e9931a82adc3814275b6219a03e24e36b4c' 2023-05-06T10:20:14.2067765Z Submodule path 'third_party/VulkanMemoryAllocator': checked out 'a6bfc237255a6bac1513f7c1ebde6d8aed6b5191' 2023-05-06T10:20:14.9847389Z Submodule path 'third_party/XNNPACK': checked out '51a987591a6fc9f0fc0707077f53d763ac132cbf' 2023-05-06T10:20:15.0366292Z Submodule path 'third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415' 2023-05-06T10:20:15.1780657Z Submodule path 'third_party/cpuinfo': checked out '8ec7bd91ad0470e61cf38f618cc1f270dede599c' 2023-05-06T10:20:15.2442445Z Submodule path 'third_party/cub': checked out 'd106ddb991a56c3df1b6d51b2409e36ba8181ce4' 2023-05-06T10:20:15.5903370Z Submodule path 'third_party/cudnn_frontend': checked out 'e7f64390e9bb4a3db622ffe11c973834f572b609' 2023-05-06T10:20:16.1434271Z Submodule path 'third_party/cutlass': checked out '7c04f954151f606e60608061e891785fba229ae2' 2023-05-06T10:20:16.4476904Z Submodule path 'third_party/eigen': checked out '3147391d946bb4b6c68edd901f2add6ac1f31f8c' 2023-05-06T10:20:16.5299421Z Submodule path 'third_party/fbgemm': checked out 'e07dda2d50562f534abc682b0dc79a8cd5ce819d' 2023-05-06T10:20:16.5355315Z Submodule 'third_party/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:20:16.5361036Z Submodule 'third_party/cpuinfo' 
(https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:20:16.5366357Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:20:16.5373074Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:20:16.5379004Z Submodule 'third_party/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:20:16.5427485Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/asmjit'... 2023-05-06T10:20:17.7016933Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cpuinfo'... 2023-05-06T10:20:18.4826602Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cutlass'... 2023-05-06T10:20:20.2890849Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/googletest'... 2023-05-06T10:20:21.9755280Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/hipify_torch'... 2023-05-06T10:20:22.5479328Z Submodule path 'third_party/fbgemm/third_party/asmjit': checked out 'd3fbf7c9bc7c1d1365a94a45614b91c5a3706b81' 2023-05-06T10:20:22.6880251Z Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3' 2023-05-06T10:20:23.1599240Z Submodule path 'third_party/fbgemm/third_party/cutlass': checked out 'fc9ebc645b63f3a6bc80aaefde5c063fb72110d6' 2023-05-06T10:20:23.2521087Z Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796' 2023-05-06T10:20:23.2848035Z Submodule path 'third_party/fbgemm/third_party/hipify_torch': checked out '23f53b025b466d8ec3c45d52290d3442f7fbe6b1' 2023-05-06T10:20:23.4104022Z Submodule path 'third_party/flatbuffers': checked out 'd0cede9c90c5257537c293517a21376408b549fa' 2023-05-06T10:20:23.4739932Z Submodule path 'third_party/fmt': checked out 'a33701196adfad74917046096bf5a2aa0ab0bb50' 2023-05-06T10:20:23.5091036Z Submodule path 'third_party/foxi': checked out 'c278588e34e535f0bb8f00df3880d26928038cad' 2023-05-06T10:20:23.5808402Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2023-05-06T10:20:23.6324889Z Submodule path 'third_party/gloo': checked out '10909297fedab0a680799211a299203e53515032' 2023-05-06T10:20:23.7105919Z Submodule path 'third_party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2023-05-06T10:20:23.7495982Z Submodule path 'third_party/ideep': checked out 'fe8378249600442043b98f333b8b605bedca5a25' 2023-05-06T10:20:23.7547797Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2023-05-06T10:20:23.7592298Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 
2023-05-06T10:20:34.4750498Z Submodule path 'third_party/ideep/mkl-dnn': checked out '6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e' 2023-05-06T10:20:34.5129659Z Submodule path 'third_party/ios-cmake': checked out '8abaed637d56f1337d6e1d2c4026e25c1eade724' 2023-05-06T10:20:34.5548649Z Submodule path 'third_party/ittapi': checked out '5b8a7d7422611c3a0d799fb5fc5dd4abfae35b42' 2023-05-06T10:20:34.6812075Z Submodule path 'third_party/kineto': checked out '21beef3787b4134c43584f6c2443341921c41f69' 2023-05-06T10:20:34.6866309Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:20:34.6873871Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T10:20:34.6880831Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:20:34.6928179Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'... 2023-05-06T10:20:35.4166194Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2023-05-06T10:20:37.1133121Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 2023-05-06T10:20:38.8817513Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e' 2023-05-06T10:20:38.8872760Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:20:38.8880259Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T10:20:38.8888271Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:20:38.8894494Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:20:38.8902587Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:20:38.8908887Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:20:38.8915390Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:20:38.8922151Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:20:38.8972414Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'... 2023-05-06T10:20:40.1194797Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'... 
2023-05-06T10:20:40.7266608Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'...
2023-05-06T10:20:42.4262719Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'...
2023-05-06T10:20:43.0249304Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'...
2023-05-06T10:20:43.7631865Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'...
2023-05-06T10:20:45.4165191Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'...
2023-05-06T10:20:52.7311315Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'...
2023-05-06T10:20:53.3948120Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9'
2023-05-06T10:20:53.4347233Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400'
2023-05-06T10:20:53.4999014Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05'
2023-05-06T10:20:53.5356092Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067'
2023-05-06T10:20:53.5407561Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2023-05-06T10:20:53.5453990Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'...
2023-05-06T10:20:54.1947287Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2023-05-06T10:20:54.2372386Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2023-05-06T10:20:54.3063500Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850' 2023-05-06T10:20:54.4462394Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2023-05-06T10:20:54.4869692Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2023-05-06T10:20:54.5477312Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out 'a33701196adfad74917046096bf5a2aa0ab0bb50' 2023-05-06T10:20:54.6352170Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2023-05-06T10:20:54.6852568Z Submodule path 'third_party/nccl/nccl': checked out '9b7d5edbfcd70a1ab0874e27382d5c31f769aa0f' 2023-05-06T10:20:54.7270643Z Submodule path 'third_party/neon2sse': checked out '97a126f08ce318023be604d03f88bf0820a9464a' 2023-05-06T10:20:54.8698675Z Submodule path 'third_party/nlohmann': checked out '87cda1d6646592ac5866dc703c8e1839046a6806' 2023-05-06T10:20:55.2123667Z Submodule path 'third_party/onnx': checked out '389b6bcb05b9479d149d29b2461fbffe8472ed14' 2023-05-06T10:20:55.2195817Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx/third_party/benchmark' 2023-05-06T10:20:55.2202305Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2023-05-06T10:20:55.2266928Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/benchmark'... 2023-05-06T10:20:55.8886372Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 2023-05-06T10:20:57.0913905Z Submodule path 'third_party/onnx/third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415' 2023-05-06T10:20:57.1532753Z Submodule path 'third_party/onnx/third_party/pybind11': checked out '914c06fb252b6cc3727d0eedab6736e88a3fcb01' 2023-05-06T10:20:57.1977527Z Submodule path 'third_party/onnx-tensorrt': checked out 'c153211418a7c57ce071d9ce2a41f8d1c85a878f' 2023-05-06T10:20:57.2028456Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:20:57.2072915Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx'... 
2023-05-06T10:21:00.2283881Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx': checked out '765f5ee823a67a866f4bd28a9860e81f3c811ce8' 2023-05-06T10:21:00.2345367Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:00.2353114Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:00.2407345Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark'... 2023-05-06T10:21:00.8996412Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11'... 2023-05-06T10:21:02.1131866Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark': checked out 'e776aa0275e293707b6a0901e0e8d8a8a3679508' 2023-05-06T10:21:02.2117581Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11': checked out 'a1041190c8b8ff0cd9e2f0752248ad5e3789ea0c' 2023-05-06T10:21:02.2171074Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:02.2217086Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang'... 2023-05-06T10:21:02.6797358Z Submodule path 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2023-05-06T10:21:02.7153813Z Submodule path 'third_party/pocketfft': checked out 'ea778e37710c07723435b1be58235996d1d43a5a' 2023-05-06T10:21:03.0345757Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2023-05-06T10:21:03.0409439Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:03.0417216Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:03.0470441Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2023-05-06T10:21:03.7396942Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 
2023-05-06T10:21:05.4514055Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2023-05-06T10:21:05.5538768Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2023-05-06T10:21:05.5887773Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2023-05-06T10:21:05.6247093Z Submodule path 'third_party/pthreadpool': checked out 'a134dd5d4cee80cce15db81a72e7f929d71dd413' 2023-05-06T10:21:05.6874996Z Submodule path 'third_party/pybind11': checked out '80dc998efced8ceb2be59756668a7e90e8bef917' 2023-05-06T10:21:05.7240717Z Submodule path 'third_party/python-enum': checked out '4cfedc426c4e2fc52e3f5c2b4297e15ed8d6b8c7' 2023-05-06T10:21:05.7875856Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2023-05-06T10:21:05.8241257Z Submodule path 'third_party/python-six': checked out '15e31431af97e5e64b80af0a3f598d382bcdd49a' 2023-05-06T10:21:05.9070404Z Submodule path 'third_party/sleef': checked out 'e0a003ee838b75d11763aa9c3ef17bf71a725bff' 2023-05-06T10:21:06.0889859Z Submodule path 'third_party/tbb': checked out 'a51a90bc609bb73db8ea13841b5cf7aa4344d4a9' 2023-05-06T10:21:06.1450201Z Submodule path 'third_party/tensorpipe': checked out '52791a2fd214b2a9dc5759d36725909c1daa7f2e' 2023-05-06T10:21:06.1505465Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:06.1512049Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2023-05-06T10:21:06.1519799Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:06.1527077Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:06.1576438Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2023-05-06T10:21:07.8286544Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2023-05-06T10:21:08.3277885Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2023-05-06T10:21:09.8545689Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 
2023-05-06T10:21:11.1039994Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2023-05-06T10:21:11.1430432Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2023-05-06T10:21:11.2413924Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242' 2023-05-06T10:21:11.2953650Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2023-05-06T10:21:11.3011124Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:11.3058838Z Cloning into '/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 2023-05-06T10:21:11.7742732Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2023-05-06T10:21:11.9573890Z Submodule path 'third_party/zstd': checked out 'aec56a52fbab207fc639a1937d1e708a282edca8' 2023-05-06T10:21:11.9673371Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2023-05-06T10:21:11.9970457Z Entering 'android/libs/fbjni' 2023-05-06T10:21:12.0013470Z Entering 'third_party/FP16' 2023-05-06T10:21:12.0055386Z Entering 'third_party/FXdiv' 2023-05-06T10:21:12.0097563Z Entering 'third_party/NNPACK' 2023-05-06T10:21:12.0140604Z Entering 'third_party/QNNPACK' 2023-05-06T10:21:12.0181466Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T10:21:12.0224105Z Entering 'third_party/XNNPACK' 2023-05-06T10:21:12.0282060Z Entering 'third_party/benchmark' 2023-05-06T10:21:12.0323441Z Entering 'third_party/cpuinfo' 2023-05-06T10:21:12.0364580Z Entering 'third_party/cub' 2023-05-06T10:21:12.0406763Z Entering 'third_party/cudnn_frontend' 2023-05-06T10:21:12.0454617Z Entering 'third_party/cutlass' 2023-05-06T10:21:12.0506698Z Entering 'third_party/eigen' 2023-05-06T10:21:12.0550703Z Entering 'third_party/fbgemm' 2023-05-06T10:21:12.0592019Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:21:12.0632702Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:21:12.0673302Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:21:12.0722306Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:21:12.0763733Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:21:12.0806097Z Entering 'third_party/flatbuffers' 2023-05-06T10:21:12.0850347Z Entering 'third_party/fmt' 2023-05-06T10:21:12.0892010Z Entering 'third_party/foxi' 2023-05-06T10:21:12.0933217Z Entering 'third_party/gemmlowp/gemmlowp' 2023-05-06T10:21:12.0975390Z Entering 'third_party/gloo' 2023-05-06T10:21:12.1017655Z Entering 'third_party/googletest' 2023-05-06T10:21:12.1059381Z Entering 'third_party/ideep' 2023-05-06T10:21:12.1099432Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T10:21:12.1150177Z Entering 'third_party/ios-cmake' 2023-05-06T10:21:12.1191626Z Entering 'third_party/ittapi' 2023-05-06T10:21:12.1232907Z Entering 'third_party/kineto' 2023-05-06T10:21:12.1273831Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:21:12.1313765Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:21:12.1355584Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 
2023-05-06T10:21:12.1395894Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:21:12.1435850Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:21:12.1474576Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T10:21:12.1517909Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:21:12.1558106Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:21:12.1600367Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:21:12.1641899Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:21:12.1684379Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T10:21:12.1725303Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:21:12.1767742Z Entering 'third_party/nccl/nccl' 2023-05-06T10:21:12.1809009Z Entering 'third_party/neon2sse' 2023-05-06T10:21:12.1850108Z Entering 'third_party/nlohmann' 2023-05-06T10:21:12.1893293Z Entering 'third_party/onnx' 2023-05-06T10:21:12.1954086Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.1995454Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.2041101Z Entering 'third_party/onnx-tensorrt' 2023-05-06T10:21:12.2081701Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:21:12.2140411Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.2181307Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.2220728Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:12.2268290Z Entering 'third_party/pocketfft' 2023-05-06T10:21:12.2308784Z Entering 'third_party/protobuf' 2023-05-06T10:21:12.2353226Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:12.2393724Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:12.2436279Z Entering 'third_party/psimd' 2023-05-06T10:21:12.2477597Z Entering 'third_party/pthreadpool' 2023-05-06T10:21:12.2519216Z Entering 'third_party/pybind11' 2023-05-06T10:21:12.2560199Z Entering 'third_party/python-enum' 2023-05-06T10:21:12.2602318Z Entering 'third_party/python-peachpy' 2023-05-06T10:21:12.2643511Z Entering 'third_party/python-six' 2023-05-06T10:21:12.2684589Z Entering 'third_party/sleef' 2023-05-06T10:21:12.2725499Z Entering 'third_party/tbb' 2023-05-06T10:21:12.2768287Z Entering 'third_party/tensorpipe' 2023-05-06T10:21:12.2810119Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:12.2850650Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-05-06T10:21:12.2891374Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:12.2932122Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:12.2972016Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:12.3016849Z Entering 'third_party/zstd' 2023-05-06T10:21:12.3075739Z ##[endgroup] 2023-05-06T10:21:12.3076214Z ##[group]Persisting credentials for submodules 2023-05-06T10:21:12.3085906Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || : 2023-05-06T10:21:12.3370230Z Entering 'android/libs/fbjni' 
2023-05-06T10:21:12.3410296Z Entering 'third_party/FP16' 2023-05-06T10:21:12.3450161Z Entering 'third_party/FXdiv' 2023-05-06T10:21:12.3489311Z Entering 'third_party/NNPACK' 2023-05-06T10:21:12.3528722Z Entering 'third_party/QNNPACK' 2023-05-06T10:21:12.3568335Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T10:21:12.3607622Z Entering 'third_party/XNNPACK' 2023-05-06T10:21:12.3662303Z Entering 'third_party/benchmark' 2023-05-06T10:21:12.3702889Z Entering 'third_party/cpuinfo' 2023-05-06T10:21:12.3743522Z Entering 'third_party/cub' 2023-05-06T10:21:12.3784363Z Entering 'third_party/cudnn_frontend' 2023-05-06T10:21:12.3829154Z Entering 'third_party/cutlass' 2023-05-06T10:21:12.3877956Z Entering 'third_party/eigen' 2023-05-06T10:21:12.3920479Z Entering 'third_party/fbgemm' 2023-05-06T10:21:12.3959966Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:21:12.3998554Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:21:12.4037795Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:21:12.4086978Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:21:12.4125692Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:21:12.4167119Z Entering 'third_party/flatbuffers' 2023-05-06T10:21:12.4208055Z Entering 'third_party/fmt' 2023-05-06T10:21:12.4248414Z Entering 'third_party/foxi' 2023-05-06T10:21:12.4286552Z Entering 'third_party/gemmlowp/gemmlowp' 2023-05-06T10:21:12.4327484Z Entering 'third_party/gloo' 2023-05-06T10:21:12.4366214Z Entering 'third_party/googletest' 2023-05-06T10:21:12.4406613Z Entering 'third_party/ideep' 2023-05-06T10:21:12.4444201Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T10:21:12.4492375Z Entering 'third_party/ios-cmake' 2023-05-06T10:21:12.4531413Z Entering 'third_party/ittapi' 2023-05-06T10:21:12.4569921Z Entering 'third_party/kineto' 2023-05-06T10:21:12.4608401Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:21:12.4646363Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:21:12.4686669Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T10:21:12.4725390Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:21:12.4763913Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:21:12.4800839Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T10:21:12.4842446Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:21:12.4880376Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:21:12.4919848Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:21:12.4959060Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:21:12.4999437Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T10:21:12.5038188Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:21:12.5079093Z Entering 'third_party/nccl/nccl' 2023-05-06T10:21:12.5117497Z Entering 'third_party/neon2sse' 2023-05-06T10:21:12.5155876Z Entering 'third_party/nlohmann' 2023-05-06T10:21:12.5196789Z Entering 'third_party/onnx' 2023-05-06T10:21:12.5252489Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.5292569Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.5333893Z Entering 
'third_party/onnx-tensorrt' 2023-05-06T10:21:12.5371903Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:21:12.5415738Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.5453603Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.5491416Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:12.5536914Z Entering 'third_party/pocketfft' 2023-05-06T10:21:12.5575601Z Entering 'third_party/protobuf' 2023-05-06T10:21:12.5619961Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:12.5659575Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:12.5700730Z Entering 'third_party/psimd' 2023-05-06T10:21:12.5739605Z Entering 'third_party/pthreadpool' 2023-05-06T10:21:12.5778707Z Entering 'third_party/pybind11' 2023-05-06T10:21:12.5818063Z Entering 'third_party/python-enum' 2023-05-06T10:21:12.5856815Z Entering 'third_party/python-peachpy' 2023-05-06T10:21:12.5895511Z Entering 'third_party/python-six' 2023-05-06T10:21:12.5935046Z Entering 'third_party/sleef' 2023-05-06T10:21:12.5974878Z Entering 'third_party/tbb' 2023-05-06T10:21:12.6015513Z Entering 'third_party/tensorpipe' 2023-05-06T10:21:12.6053665Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:12.6091953Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-05-06T10:21:12.6128617Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:12.6166097Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:12.6201834Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:12.6243291Z Entering 'third_party/zstd' 2023-05-06T10:21:12.6296999Z [command]/usr/bin/git submodule foreach --recursive git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url 2023-05-06T10:21:12.6578279Z Entering 'android/libs/fbjni' 2023-05-06T10:21:12.6614717Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2023-05-06T10:21:12.6632752Z Entering 'third_party/FP16' 2023-05-06T10:21:12.6669288Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2023-05-06T10:21:12.6688771Z Entering 'third_party/FXdiv' 2023-05-06T10:21:12.6724300Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2023-05-06T10:21:12.6743343Z Entering 'third_party/NNPACK' 2023-05-06T10:21:12.6779329Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2023-05-06T10:21:12.6798297Z Entering 'third_party/QNNPACK' 2023-05-06T10:21:12.6834902Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/QNNPACK/config remote.origin.url 2023-05-06T10:21:12.6853892Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T10:21:12.6889924Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2023-05-06T10:21:12.6908423Z Entering 'third_party/XNNPACK' 2023-05-06T10:21:12.6944929Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2023-05-06T10:21:12.6979554Z Entering 
'third_party/benchmark' 2023-05-06T10:21:12.7016691Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2023-05-06T10:21:12.7035386Z Entering 'third_party/cpuinfo' 2023-05-06T10:21:12.7072009Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2023-05-06T10:21:12.7092421Z Entering 'third_party/cub' 2023-05-06T10:21:12.7129140Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cub/config remote.origin.url 2023-05-06T10:21:12.7148104Z Entering 'third_party/cudnn_frontend' 2023-05-06T10:21:12.7185960Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2023-05-06T10:21:12.7211800Z Entering 'third_party/cutlass' 2023-05-06T10:21:12.7250432Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2023-05-06T10:21:12.7279465Z Entering 'third_party/eigen' 2023-05-06T10:21:12.7317522Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/eigen/config remote.origin.url 2023-05-06T10:21:12.7339009Z Entering 'third_party/fbgemm' 2023-05-06T10:21:12.7378461Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2023-05-06T10:21:12.7397240Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:21:12.7433709Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/asmjit/config remote.origin.url 2023-05-06T10:21:12.7452723Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:21:12.7490543Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cpuinfo/config remote.origin.url 2023-05-06T10:21:12.7508954Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:21:12.7545368Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cutlass/config remote.origin.url 2023-05-06T10:21:12.7573253Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:21:12.7608669Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/googletest/config remote.origin.url 2023-05-06T10:21:12.7626829Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:21:12.7662977Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/hipify_torch/config remote.origin.url 2023-05-06T10:21:12.7683033Z Entering 'third_party/flatbuffers' 2023-05-06T10:21:12.7719256Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2023-05-06T10:21:12.7740162Z Entering 'third_party/fmt' 2023-05-06T10:21:12.7777927Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2023-05-06T10:21:12.7797611Z Entering 'third_party/foxi' 2023-05-06T10:21:12.7833640Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/foxi/config remote.origin.url 2023-05-06T10:21:12.7854705Z Entering 'third_party/gemmlowp/gemmlowp' 2023-05-06T10:21:12.7891862Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 
2023-05-06T10:21:12.7911089Z Entering 'third_party/gloo' 2023-05-06T10:21:12.7948130Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2023-05-06T10:21:12.7968429Z Entering 'third_party/googletest' 2023-05-06T10:21:12.8004830Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2023-05-06T10:21:12.8024162Z Entering 'third_party/ideep' 2023-05-06T10:21:12.8060236Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2023-05-06T10:21:12.8078186Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T10:21:12.8114596Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2023-05-06T10:21:12.8143820Z Entering 'third_party/ios-cmake' 2023-05-06T10:21:12.8181340Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ios-cmake/config remote.origin.url 2023-05-06T10:21:12.8200515Z Entering 'third_party/ittapi' 2023-05-06T10:21:12.8236041Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2023-05-06T10:21:12.8255228Z Entering 'third_party/kineto' 2023-05-06T10:21:12.8291254Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2023-05-06T10:21:12.8310478Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:21:12.8346814Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2023-05-06T10:21:12.8364976Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:21:12.8402680Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2023-05-06T10:21:12.8422734Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T10:21:12.8458850Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2023-05-06T10:21:12.8477619Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:21:12.8513430Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2023-05-06T10:21:12.8532721Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:21:12.8569053Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2023-05-06T10:21:12.8585392Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T10:21:12.8622724Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2023-05-06T10:21:12.8643997Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:21:12.8679979Z 
file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2023-05-06T10:21:12.8699931Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:21:12.8735574Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2023-05-06T10:21:12.8755760Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:21:12.8791841Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2023-05-06T10:21:12.8811987Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:21:12.8848009Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2023-05-06T10:21:12.8868417Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T10:21:12.8904062Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2023-05-06T10:21:12.8923486Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:21:12.8959674Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2023-05-06T10:21:12.8980512Z Entering 'third_party/nccl/nccl' 2023-05-06T10:21:12.9017751Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nccl/nccl/config remote.origin.url 2023-05-06T10:21:12.9038480Z Entering 'third_party/neon2sse' 2023-05-06T10:21:12.9073736Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/neon2sse/config remote.origin.url 2023-05-06T10:21:12.9093672Z Entering 'third_party/nlohmann' 2023-05-06T10:21:12.9129986Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2023-05-06T10:21:12.9151198Z Entering 'third_party/onnx' 2023-05-06T10:21:12.9187925Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2023-05-06T10:21:12.9225242Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.9260756Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2023-05-06T10:21:12.9281361Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.9316833Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2023-05-06T10:21:12.9338834Z Entering 'third_party/onnx-tensorrt' 2023-05-06T10:21:12.9376285Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/config remote.origin.url 2023-05-06T10:21:12.9394065Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:21:12.9430866Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/config remote.origin.url 2023-05-06T10:21:12.9454944Z Entering 
'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:12.9492204Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2023-05-06T10:21:12.9510153Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:12.9546522Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2023-05-06T10:21:12.9564000Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:12.9600645Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx-tensorrt/modules/third_party/onnx/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2023-05-06T10:21:12.9625888Z Entering 'third_party/pocketfft' 2023-05-06T10:21:12.9662943Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2023-05-06T10:21:12.9682012Z Entering 'third_party/protobuf' 2023-05-06T10:21:12.9718226Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2023-05-06T10:21:12.9740077Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:12.9777238Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2023-05-06T10:21:12.9796070Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:12.9832119Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2023-05-06T10:21:12.9853849Z Entering 'third_party/psimd' 2023-05-06T10:21:12.9889721Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2023-05-06T10:21:12.9908132Z Entering 'third_party/pthreadpool' 2023-05-06T10:21:12.9944708Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2023-05-06T10:21:12.9964061Z Entering 'third_party/pybind11' 2023-05-06T10:21:13.0000138Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2023-05-06T10:21:13.0019874Z Entering 'third_party/python-enum' 2023-05-06T10:21:13.0055571Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-enum/config remote.origin.url 2023-05-06T10:21:13.0073879Z Entering 'third_party/python-peachpy' 2023-05-06T10:21:13.0110167Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2023-05-06T10:21:13.0129454Z Entering 'third_party/python-six' 2023-05-06T10:21:13.0165901Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-six/config remote.origin.url 2023-05-06T10:21:13.0184082Z Entering 'third_party/sleef' 2023-05-06T10:21:13.0221281Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2023-05-06T10:21:13.0240727Z Entering 'third_party/tbb' 2023-05-06T10:21:13.0276907Z 
file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tbb/config remote.origin.url 2023-05-06T10:21:13.0298264Z Entering 'third_party/tensorpipe' 2023-05-06T10:21:13.0335870Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2023-05-06T10:21:13.0353907Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:13.0393969Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2023-05-06T10:21:13.0412194Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-05-06T10:21:13.0447683Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2023-05-06T10:21:13.0464700Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:13.0499620Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2023-05-06T10:21:13.0517402Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:13.0553466Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2023-05-06T10:21:13.0570590Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:13.0607674Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2023-05-06T10:21:13.0630454Z Entering 'third_party/zstd' 2023-05-06T10:21:13.0666238Z file:/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/zstd/config remote.origin.url 2023-05-06T10:21:13.1045631Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2023-05-06T10:21:13.1333601Z Entering 'android/libs/fbjni' 2023-05-06T10:21:13.1375138Z Entering 'third_party/FP16' 2023-05-06T10:21:13.1416043Z Entering 'third_party/FXdiv' 2023-05-06T10:21:13.1458148Z Entering 'third_party/NNPACK' 2023-05-06T10:21:13.1498939Z Entering 'third_party/QNNPACK' 2023-05-06T10:21:13.1540926Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T10:21:13.1582573Z Entering 'third_party/XNNPACK' 2023-05-06T10:21:13.1639442Z Entering 'third_party/benchmark' 2023-05-06T10:21:13.1680785Z Entering 'third_party/cpuinfo' 2023-05-06T10:21:13.1721864Z Entering 'third_party/cub' 2023-05-06T10:21:13.1763118Z Entering 'third_party/cudnn_frontend' 2023-05-06T10:21:13.1810322Z Entering 'third_party/cutlass' 2023-05-06T10:21:13.1860142Z Entering 'third_party/eigen' 2023-05-06T10:21:13.1903870Z Entering 'third_party/fbgemm' 2023-05-06T10:21:13.1946118Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:21:13.1986552Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:21:13.2026979Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:21:13.2075485Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:21:13.2115624Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:21:13.2157412Z Entering 'third_party/flatbuffers' 2023-05-06T10:21:13.2202752Z Entering 'third_party/fmt' 2023-05-06T10:21:13.2244181Z Entering 'third_party/foxi' 2023-05-06T10:21:13.2284158Z Entering 'third_party/gemmlowp/gemmlowp' 
2023-05-06T10:21:13.2324676Z Entering 'third_party/gloo' 2023-05-06T10:21:13.2371311Z Entering 'third_party/googletest' 2023-05-06T10:21:13.2412281Z Entering 'third_party/ideep' 2023-05-06T10:21:13.2452792Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T10:21:13.2503209Z Entering 'third_party/ios-cmake' 2023-05-06T10:21:13.2544574Z Entering 'third_party/ittapi' 2023-05-06T10:21:13.2585545Z Entering 'third_party/kineto' 2023-05-06T10:21:13.2627396Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:21:13.2667846Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:21:13.2710878Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T10:21:13.2750471Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:21:13.2790521Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:21:13.2830121Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T10:21:13.2873573Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:21:13.2914832Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:21:13.2956216Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:21:13.2997377Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:21:13.3039323Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T10:21:13.3079991Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:21:13.3122836Z Entering 'third_party/nccl/nccl' 2023-05-06T10:21:13.3166183Z Entering 'third_party/neon2sse' 2023-05-06T10:21:13.3207612Z Entering 'third_party/nlohmann' 2023-05-06T10:21:13.3251169Z Entering 'third_party/onnx' 2023-05-06T10:21:13.3308745Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T10:21:13.3351263Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T10:21:13.3393892Z Entering 'third_party/onnx-tensorrt' 2023-05-06T10:21:13.3435079Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:21:13.3481056Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:13.3522078Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:13.3561127Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:13.3610030Z Entering 'third_party/pocketfft' 2023-05-06T10:21:13.3651266Z Entering 'third_party/protobuf' 2023-05-06T10:21:13.3696382Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:13.3737642Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:13.3780951Z Entering 'third_party/psimd' 2023-05-06T10:21:13.3823965Z Entering 'third_party/pthreadpool' 2023-05-06T10:21:13.3867698Z Entering 'third_party/pybind11' 2023-05-06T10:21:13.3912323Z Entering 'third_party/python-enum' 2023-05-06T10:21:13.3953105Z Entering 'third_party/python-peachpy' 2023-05-06T10:21:13.3994365Z Entering 'third_party/python-six' 2023-05-06T10:21:13.4034865Z Entering 'third_party/sleef' 2023-05-06T10:21:13.4075925Z Entering 'third_party/tbb' 2023-05-06T10:21:13.4118962Z Entering 'third_party/tensorpipe' 2023-05-06T10:21:13.4160348Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:13.4200536Z Entering 'third_party/tensorpipe/third_party/libnop' 
2023-05-06T10:21:13.4240333Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:13.4279939Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:13.4318959Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:13.4363931Z Entering 'third_party/zstd' 2023-05-06T10:21:13.4420179Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2023-05-06T10:21:13.4701242Z Entering 'android/libs/fbjni' 2023-05-06T10:21:13.4742722Z Entering 'third_party/FP16' 2023-05-06T10:21:13.4783635Z Entering 'third_party/FXdiv' 2023-05-06T10:21:13.4825717Z Entering 'third_party/NNPACK' 2023-05-06T10:21:13.4866435Z Entering 'third_party/QNNPACK' 2023-05-06T10:21:13.4907636Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T10:21:13.4948496Z Entering 'third_party/XNNPACK' 2023-05-06T10:21:13.5004793Z Entering 'third_party/benchmark' 2023-05-06T10:21:13.5046153Z Entering 'third_party/cpuinfo' 2023-05-06T10:21:13.5087383Z Entering 'third_party/cub' 2023-05-06T10:21:13.5128083Z Entering 'third_party/cudnn_frontend' 2023-05-06T10:21:13.5176411Z Entering 'third_party/cutlass' 2023-05-06T10:21:13.5225541Z Entering 'third_party/eigen' 2023-05-06T10:21:13.5268402Z Entering 'third_party/fbgemm' 2023-05-06T10:21:13.5310239Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T10:21:13.5349574Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T10:21:13.5389855Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T10:21:13.5437599Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T10:21:13.5477273Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T10:21:13.5518066Z Entering 'third_party/flatbuffers' 2023-05-06T10:21:13.5563027Z Entering 'third_party/fmt' 2023-05-06T10:21:13.5605181Z Entering 'third_party/foxi' 2023-05-06T10:21:13.5646173Z Entering 'third_party/gemmlowp/gemmlowp' 2023-05-06T10:21:13.5687614Z Entering 'third_party/gloo' 2023-05-06T10:21:13.5729375Z Entering 'third_party/googletest' 2023-05-06T10:21:13.5771179Z Entering 'third_party/ideep' 2023-05-06T10:21:13.5811668Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T10:21:13.5862459Z Entering 'third_party/ios-cmake' 2023-05-06T10:21:13.5904259Z Entering 'third_party/ittapi' 2023-05-06T10:21:13.5945585Z Entering 'third_party/kineto' 2023-05-06T10:21:13.5986277Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T10:21:13.6026085Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T10:21:13.6068698Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T10:21:13.6108742Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T10:21:13.6148104Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T10:21:13.6186715Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T10:21:13.6230662Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T10:21:13.6271910Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T10:21:13.6312723Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T10:21:13.6354250Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T10:21:13.6396992Z Entering 'third_party/kineto/libkineto/third_party/fmt' 
2023-05-06T10:21:13.6437726Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T10:21:13.6479996Z Entering 'third_party/nccl/nccl' 2023-05-06T10:21:13.6522485Z Entering 'third_party/neon2sse' 2023-05-06T10:21:13.6562878Z Entering 'third_party/nlohmann' 2023-05-06T10:21:13.6605587Z Entering 'third_party/onnx' 2023-05-06T10:21:13.6662271Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T10:21:13.6702818Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T10:21:13.6745320Z Entering 'third_party/onnx-tensorrt' 2023-05-06T10:21:13.6785669Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T10:21:13.6830209Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T10:21:13.6871270Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T10:21:13.6910826Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T10:21:13.6957012Z Entering 'third_party/pocketfft' 2023-05-06T10:21:13.6996794Z Entering 'third_party/protobuf' 2023-05-06T10:21:13.7041053Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T10:21:13.7081967Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T10:21:13.7123997Z Entering 'third_party/psimd' 2023-05-06T10:21:13.7164551Z Entering 'third_party/pthreadpool' 2023-05-06T10:21:13.7205743Z Entering 'third_party/pybind11' 2023-05-06T10:21:13.7245777Z Entering 'third_party/python-enum' 2023-05-06T10:21:13.7285967Z Entering 'third_party/python-peachpy' 2023-05-06T10:21:13.7328429Z Entering 'third_party/python-six' 2023-05-06T10:21:13.7370169Z Entering 'third_party/sleef' 2023-05-06T10:21:13.7410626Z Entering 'third_party/tbb' 2023-05-06T10:21:13.7452938Z Entering 'third_party/tensorpipe' 2023-05-06T10:21:13.7494693Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T10:21:13.7533827Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-05-06T10:21:13.7573947Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T10:21:13.7613569Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T10:21:13.7652316Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T10:21:13.7695749Z Entering 'third_party/zstd' 2023-05-06T10:21:13.7746894Z ##[endgroup] 2023-05-06T10:21:13.7811227Z [command]/usr/bin/git log -1 --format='%H' 2023-05-06T10:21:13.7847507Z 'd719f0276d69a8315b65f4c4500cfc1cdaddb025' 2023-05-06T10:21:13.8080762Z Prepare all required actions 2023-05-06T10:21:13.8081790Z Getting action download info 2023-05-06T10:21:14.0184708Z ##[group]Run ./.github/actions/setup-linux 2023-05-06T10:21:14.0185132Z env: 2023-05-06T10:21:14.0185502Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:14.0185903Z ##[endgroup] 2023-05-06T10:21:14.0261725Z ##[group]Run set -euo pipefail 2023-05-06T10:21:14.0262029Z set -euo pipefail 2023-05-06T10:21:14.0262290Z function get_ec2_metadata() { 2023-05-06T10:21:14.0262577Z  # Pulled from instance metadata endpoint for EC2 2023-05-06T10:21:14.0263036Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2023-05-06T10:21:14.0263402Z  category=$1 2023-05-06T10:21:14.0263695Z  # If it is GCP runner (runner name contains gcp), do not run this 2023-05-06T10:21:14.0264006Z  runner_name_str=gh-ci-gcp-a100-11 2023-05-06T10:21:14.0264357Z  if [[ $runner_name_str != *"gcp"* ]]; then 2023-05-06T10:21:14.0264665Z  curl -fsSL "http://169.254.169.254/latest/meta-data/${category}" 
2023-05-06T10:21:14.0264984Z  else 2023-05-06T10:21:14.0265270Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2023-05-06T10:21:14.0265553Z  fi 2023-05-06T10:21:14.0265730Z } 2023-05-06T10:21:14.0265977Z echo "ami-id: $(get_ec2_metadata ami-id)" 2023-05-06T10:21:14.0266287Z echo "instance-id: $(get_ec2_metadata instance-id)" 2023-05-06T10:21:14.0266608Z echo "instance-type: $(get_ec2_metadata instance-type)" 2023-05-06T10:21:14.0266915Z echo "system info $(uname -a)" 2023-05-06T10:21:14.0286637Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:21:14.0286969Z env: 2023-05-06T10:21:14.0287164Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:14.0287394Z ##[endgroup] 2023-05-06T10:21:14.0333862Z ami-id: Runner is from Google Cloud Platform, No info on ec2 metadata 2023-05-06T10:21:14.0339748Z instance-id: Runner is from Google Cloud Platform, No info on ec2 metadata 2023-05-06T10:21:14.0345423Z instance-type: Runner is from Google Cloud Platform, No info on ec2 metadata 2023-05-06T10:21:14.0356834Z system info Linux gh-ci-gcp-a100-11 5.15.0-1031-gcp #38~20.04.1-Ubuntu SMP Sun Mar 12 04:47:58 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux 2023-05-06T10:21:14.0374403Z ##[group]Run if systemctl is-active --quiet docker; then 2023-05-06T10:21:14.0374758Z if systemctl is-active --quiet docker; then 2023-05-06T10:21:14.0375053Z  echo "Docker daemon is running..."; 2023-05-06T10:21:14.0375311Z else 2023-05-06T10:21:14.0375580Z  echo "Starting docker deamon..." && sudo systemctl start docker; 2023-05-06T10:21:14.0375853Z fi 2023-05-06T10:21:14.0392631Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:21:14.0392882Z env: 2023-05-06T10:21:14.0393088Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:14.0393329Z ##[endgroup] 2023-05-06T10:21:14.0462427Z Docker daemon is running... 2023-05-06T10:21:14.0496491Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2023-05-06T10:21:14.0496780Z with: 2023-05-06T10:21:14.0496963Z shell: bash 2023-05-06T10:21:14.0497182Z timeout_minutes: 5 2023-05-06T10:21:14.0497399Z max_attempts: 3 2023-05-06T10:21:14.0497601Z retry_wait_seconds: 30 2023-05-06T10:21:14.0498418Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login*** "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" 2023-05-06T10:21:14.0499208Z polling_interval_seconds: 1 2023-05-06T10:21:14.0499448Z warning_on_retry: true 2023-05-06T10:21:14.0499677Z continue_on_error: false 2023-05-06T10:21:14.0499872Z env: 2023-05-06T10:21:14.0500075Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:14.0500306Z AWS_RETRY_MODE: standard 2023-05-06T10:21:14.0500741Z AWS_MAX_ATTEMPTS: 5 2023-05-06T10:21:14.0500987Z AWS_DEFAULT_REGION: us-east-1 2023-05-06T10:21:14.0501217Z ##[endgroup] 2023-05-06T10:21:16.2808305Z WARNING! Your password will be stored unencrypted in /home/ubuntu/.docker/config.json. 2023-05-06T10:21:16.2808755Z Configure a credential helper to remove this warning. See 2023-05-06T10:21:16.2809646Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2023-05-06T10:21:16.2809892Z 2023-05-06T10:21:16.2816693Z Login Succeeded 2023-05-06T10:21:17.1150634Z Command completed after 1 attempt(s). 
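The setup-linux step echoed above guards its EC2 instance-metadata queries behind a runner-name check, so GCP-hosted runners (name containing "gcp") never touch the link-local IMDS endpoint and instead print a placeholder string. A minimal standalone sketch of that guard, assuming the runner name is supplied via the RUNNER_NAME environment variable and using a hypothetical helper name get_instance_metadata (the workflow's own function is called get_ec2_metadata):

#!/usr/bin/env bash
# Sketch of the metadata guard seen in the setup-linux step above (assumptions noted in the lead-in).
set -euo pipefail

RUNNER_NAME="${RUNNER_NAME:-gh-ci-gcp-a100-11}"

get_instance_metadata() {
  local category=$1
  if [[ "${RUNNER_NAME}" != *"gcp"* ]]; then
    # EC2 runners expose instance metadata on the link-local endpoint.
    curl -fsSL "http://169.254.169.254/latest/meta-data/${category}"
  else
    # Matches the placeholder output recorded in this log for GCP runners.
    echo "Runner is from Google Cloud Platform, No info on ec2 metadata"
  fi
}

echo "ami-id: $(get_instance_metadata ami-id)"
echo "instance-id: $(get_instance_metadata instance-id)"
echo "instance-type: $(get_instance_metadata instance-type)"

Run on a runner whose name contains "gcp", this prints the placeholder for all three fields, which is exactly what the log records immediately after this step.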
2023-05-06T10:21:17.1217981Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-05-06T10:21:17.1218378Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-05-06T10:21:17.1218705Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2023-05-06T10:21:17.1236538Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:21:17.1236985Z env: 2023-05-06T10:21:17.1237212Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:17.1237441Z ##[endgroup] 2023-05-06T10:21:17.1324975Z ##[group]Run set +e 2023-05-06T10:21:17.1325237Z set +e 2023-05-06T10:21:17.1325441Z set -x 2023-05-06T10:21:17.1325638Z  2023-05-06T10:21:17.1325863Z PT_DOMAIN=download.pytorch.org 2023-05-06T10:21:17.1326267Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2023-05-06T10:21:17.1326713Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2023-05-06T10:21:17.1327066Z # one is returned at random 2023-05-06T10:21:17.1327346Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2023-05-06T10:21:17.1327603Z  2023-05-06T10:21:17.1327826Z if [ -z "${RESOLVED_IP}" ]; then 2023-05-06T10:21:17.1328124Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 2023-05-06T10:21:17.1328485Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2023-05-06T10:21:17.1328745Z  2023-05-06T10:21:17.1328970Z  if [ -z "${RESOLVED_IP}" ]; then 2023-05-06T10:21:17.1329250Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 2023-05-06T10:21:17.1329511Z  exit 1 2023-05-06T10:21:17.1329713Z  fi 2023-05-06T10:21:17.1329888Z fi 2023-05-06T10:21:17.1330076Z  2023-05-06T10:21:17.1330318Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2023-05-06T10:21:17.1330587Z  # Clean up any old records first 2023-05-06T10:21:17.1330873Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2023-05-06T10:21:17.1331163Z fi 2023-05-06T10:21:17.1331334Z  2023-05-06T10:21:17.1331595Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2023-05-06T10:21:17.1331866Z cat /etc/hosts 2023-05-06T10:21:17.1348491Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:21:17.1348740Z env: 2023-05-06T10:21:17.1348954Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:17.1349176Z ##[endgroup] 2023-05-06T10:21:17.1386025Z + PT_DOMAIN=download.pytorch.org 2023-05-06T10:21:17.1393153Z ++ dig -4 +short download.pytorch.org 2023-05-06T10:21:17.1393448Z ++ tail -n1 2023-05-06T10:21:17.1502545Z + RESOLVED_IP=13.226.22.44 2023-05-06T10:21:17.1503050Z + '[' -z 13.226.22.44 ']' 2023-05-06T10:21:17.1503390Z + grep -r download.pytorch.org /etc/hosts 2023-05-06T10:21:17.1515670Z 13.226.22.44 download.pytorch.org 2023-05-06T10:21:17.1517361Z + sudo sed -i /download.pytorch.org/d /etc/hosts 2023-05-06T10:21:17.1632033Z + echo '13.226.22.44 download.pytorch.org' 2023-05-06T10:21:17.1632384Z + sudo tee -a /etc/hosts 2023-05-06T10:21:17.1714990Z 13.226.22.44 download.pytorch.org 2023-05-06T10:21:17.1723287Z + cat /etc/hosts 2023-05-06T10:21:17.1731763Z 127.0.0.1 localhost 2023-05-06T10:21:17.1731915Z 2023-05-06T10:21:17.1732471Z # The following lines are desirable for IPv6 capable hosts 2023-05-06T10:21:17.1747149Z ::1 ip6-localhost ip6-loopback 2023-05-06T10:21:17.1747518Z fe00::0 ip6-localnet 2023-05-06T10:21:17.1747799Z ff00::0 ip6-mcastprefix 2023-05-06T10:21:17.1748072Z ff02::1 ip6-allnodes 2023-05-06T10:21:17.1748343Z ff02::2 ip6-allrouters 2023-05-06T10:21:17.1748610Z ff02::3 ip6-allhosts 2023-05-06T10:21:17.1748868Z 169.254.169.254 
metadata.google.internal metadata 2023-05-06T10:21:17.1749152Z 13.226.22.44 download.pytorch.org 2023-05-06T10:21:17.1784652Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2023-05-06T10:21:17.1784961Z with: 2023-05-06T10:21:17.1785425Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:21:17.1785837Z env: 2023-05-06T10:21:17.1786042Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:17.1786266Z ##[endgroup] 2023-05-06T10:21:17.1800771Z ##[group]Run retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2023-05-06T10:21:17.1801160Z retry () { "$@" || (sleep 1 && "$@") || (sleep 2 && "$@") } 2023-05-06T10:21:17.1801485Z # ignore output since only exit code is used for conditional 2023-05-06T10:21:17.1801827Z # only pull docker image if it's not available locally 2023-05-06T10:21:17.1802173Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2023-05-06T10:21:17.1802562Z  retry docker pull "${DOCKER_IMAGE}" 2023-05-06T10:21:17.1802785Z fi 2023-05-06T10:21:17.1820503Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:21:17.1820781Z env: 2023-05-06T10:21:17.1821045Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:21:17.1821506Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:21:17.1821940Z ##[endgroup] 2023-05-06T10:21:17.7555356Z 17ccb3e70b07f61f36d65de7b3f472733f27d9eb: Pulling from pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7 2023-05-06T10:21:17.7556198Z 456d651ccb27: Pulling fs layer 2023-05-06T10:21:17.7556505Z 2ea9ef18f7bc: Pulling fs layer 2023-05-06T10:21:17.7557059Z 94a24fc23c7c: Pulling fs layer 2023-05-06T10:21:17.7557483Z 2c4f28751241: Pulling fs layer 2023-05-06T10:21:17.7557765Z a9e5e49da50d: Pulling fs layer 2023-05-06T10:21:17.7558146Z 1aece8b4bd63: Pulling fs layer 2023-05-06T10:21:17.7558576Z 8030ee497010: Pulling fs layer 2023-05-06T10:21:17.7558960Z a41332b1bef3: Pulling fs layer 2023-05-06T10:21:17.7559367Z aed8824e0575: Pulling fs layer 2023-05-06T10:21:17.7559754Z 2c4f28751241: Waiting 2023-05-06T10:21:17.7560076Z ecf188d26601: Pulling fs layer 2023-05-06T10:21:17.7560432Z 8c74646e5134: Pulling fs layer 2023-05-06T10:21:17.7560793Z a9e5e49da50d: Waiting 2023-05-06T10:21:17.7561148Z 361cd14963a4: Pulling fs layer 2023-05-06T10:21:17.7561528Z 785c0ce62454: Pulling fs layer 2023-05-06T10:21:17.7561880Z 8030ee497010: Waiting 2023-05-06T10:21:17.7562232Z f60e0fad50ae: Pulling fs layer 2023-05-06T10:21:17.7562591Z a9a53207079d: Pulling fs layer 2023-05-06T10:21:17.7562933Z a41332b1bef3: Waiting 2023-05-06T10:21:17.7563268Z bbd967773990: Pulling fs layer 2023-05-06T10:21:17.7563650Z af757f6caada: Pulling fs layer 2023-05-06T10:21:17.7588893Z 66fe2d6256cd: Pulling fs layer 2023-05-06T10:21:17.7589432Z 84db7a1a0115: Pulling fs layer 2023-05-06T10:21:17.7589834Z 492e2f843417: Pulling fs layer 2023-05-06T10:21:17.7590282Z aed8824e0575: Waiting 2023-05-06T10:21:17.7590597Z 98d47d054712: Pulling fs layer 2023-05-06T10:21:17.7590977Z 361cd14963a4: Waiting 2023-05-06T10:21:17.7591677Z 2fc2deccd088: Pulling fs layer 2023-05-06T10:21:17.7591945Z 66fe2d6256cd: Waiting 2023-05-06T10:21:17.7592338Z 0d1eb173114d: Pulling fs layer 2023-05-06T10:21:17.7592660Z 777e3ba9d1c1: Pulling fs layer 2023-05-06T10:21:17.7592980Z 84db7a1a0115: Waiting 2023-05-06T10:21:17.7593357Z 1f1655ea70ca: Pulling 
fs layer 2023-05-06T10:21:17.7593737Z 492e2f843417: Waiting 2023-05-06T10:21:17.7594136Z 939f92768065: Pulling fs layer 2023-05-06T10:21:17.7594587Z 63919a7ae761: Pulling fs layer 2023-05-06T10:21:17.7594994Z 98d47d054712: Waiting 2023-05-06T10:21:17.7595338Z 0d1eb173114d: Waiting 2023-05-06T10:21:17.7595620Z 2fc2deccd088: Waiting 2023-05-06T10:21:17.7595844Z eb2b82dbdb91: Pulling fs layer 2023-05-06T10:21:17.7596130Z f60e0fad50ae: Waiting 2023-05-06T10:21:17.7596357Z e1ab30293ccd: Pulling fs layer 2023-05-06T10:21:17.7597101Z 25b1af63a2db: Pulling fs layer 2023-05-06T10:21:17.7597457Z bbd967773990: Waiting 2023-05-06T10:21:17.7597670Z 2c6c0d559c9c: Pulling fs layer 2023-05-06T10:21:17.7597896Z 777e3ba9d1c1: Waiting 2023-05-06T10:21:17.7598112Z 1f1655ea70ca: Waiting 2023-05-06T10:21:17.7598337Z e1ab30293ccd: Waiting 2023-05-06T10:21:17.7598540Z 785c0ce62454: Waiting 2023-05-06T10:21:17.7598741Z af757f6caada: Waiting 2023-05-06T10:21:17.7598930Z a9a53207079d: Waiting 2023-05-06T10:21:17.7599132Z 25b1af63a2db: Waiting 2023-05-06T10:21:17.7599334Z 8c74646e5134: Waiting 2023-05-06T10:21:17.7599517Z 2c6c0d559c9c: Waiting 2023-05-06T10:21:17.7599720Z 63919a7ae761: Waiting 2023-05-06T10:21:17.7599914Z 939f92768065: Waiting 2023-05-06T10:21:17.7600100Z eb2b82dbdb91: Waiting 2023-05-06T10:21:17.7600318Z b17a024d1a22: Pulling fs layer 2023-05-06T10:21:17.7600555Z 0734b97bc50e: Pulling fs layer 2023-05-06T10:21:17.7600773Z 7975d3970e17: Pulling fs layer 2023-05-06T10:21:17.7600984Z b17a024d1a22: Waiting 2023-05-06T10:21:17.7601184Z 0734b97bc50e: Waiting 2023-05-06T10:21:17.7601393Z 4ba73cbaa687: Pulling fs layer 2023-05-06T10:21:17.7601626Z 91131a686cb8: Pulling fs layer 2023-05-06T10:21:17.7601858Z 76aca39da895: Pulling fs layer 2023-05-06T10:21:17.7602087Z 6835aecdd332: Pulling fs layer 2023-05-06T10:21:17.7602317Z 1f319dc82804: Pulling fs layer 2023-05-06T10:21:17.7602549Z f214c8148de0: Pulling fs layer 2023-05-06T10:21:17.7602767Z 91d6a73af5c3: Pulling fs layer 2023-05-06T10:21:17.7603000Z d4ca02113f1b: Pulling fs layer 2023-05-06T10:21:17.7603264Z 26aa90a20f4f: Pulling fs layer 2023-05-06T10:21:17.7603480Z 7975d3970e17: Waiting 2023-05-06T10:21:17.7603672Z 4ba73cbaa687: Waiting 2023-05-06T10:21:17.7603897Z a5618a6b1eaa: Pulling fs layer 2023-05-06T10:21:17.7604114Z 91131a686cb8: Waiting 2023-05-06T10:21:17.7604318Z 2562aa6e5371: Pulling fs layer 2023-05-06T10:21:17.7604545Z f8cd3744a217: Pulling fs layer 2023-05-06T10:21:17.7604780Z 6c42e3e5156a: Pulling fs layer 2023-05-06T10:21:17.7604985Z 76aca39da895: Waiting 2023-05-06T10:21:17.7605208Z df5e763c8a37: Pulling fs layer 2023-05-06T10:21:17.7605440Z 4f47abf0f8af: Pulling fs layer 2023-05-06T10:21:17.7605661Z f407718a23c2: Pulling fs layer 2023-05-06T10:21:17.7605880Z 6835aecdd332: Waiting 2023-05-06T10:21:17.7606134Z 1f319dc82804: Waiting 2023-05-06T10:21:17.7606339Z b447a852c1a3: Pulling fs layer 2023-05-06T10:21:17.7606571Z 7e786aeb415d: Pulling fs layer 2023-05-06T10:21:17.7606802Z 6d8858dda34e: Pulling fs layer 2023-05-06T10:21:17.7607007Z f214c8148de0: Waiting 2023-05-06T10:21:17.7607214Z 91d6a73af5c3: Waiting 2023-05-06T10:21:17.7607437Z 9dd9d9996530: Pulling fs layer 2023-05-06T10:21:17.7607653Z ea1853196497: Pulling fs layer 2023-05-06T10:21:17.7607867Z d4ca02113f1b: Waiting 2023-05-06T10:21:17.7608090Z 0e5b4ecd9b1a: Pulling fs layer 2023-05-06T10:21:17.7608346Z 5c594fb0ff56: Pulling fs layer 2023-05-06T10:21:17.7608579Z 27528af53d56: Pulling fs layer 2023-05-06T10:21:17.7608810Z 4e0a1162b31a: Pulling fs layer 
2023-05-06T10:21:17.7609015Z 26aa90a20f4f: Waiting 2023-05-06T10:21:17.7609224Z a5618a6b1eaa: Waiting 2023-05-06T10:21:17.7609436Z 2562aa6e5371: Waiting 2023-05-06T10:21:17.7609623Z 6d8858dda34e: Waiting 2023-05-06T10:21:17.7609824Z 9dd9d9996530: Waiting 2023-05-06T10:21:17.7610240Z f8cd3744a217: Waiting 2023-05-06T10:21:17.7610427Z 6c42e3e5156a: Waiting 2023-05-06T10:21:17.7610628Z ea1853196497: Waiting 2023-05-06T10:21:17.7610831Z 0e5b4ecd9b1a: Waiting 2023-05-06T10:21:17.7611023Z df5e763c8a37: Waiting 2023-05-06T10:21:17.7611226Z 5c594fb0ff56: Waiting 2023-05-06T10:21:17.7611431Z 4f47abf0f8af: Waiting 2023-05-06T10:21:17.7611621Z f407718a23c2: Waiting 2023-05-06T10:21:17.7611820Z 4e0a1162b31a: Waiting 2023-05-06T10:21:17.7612018Z 27528af53d56: Waiting 2023-05-06T10:21:17.7612202Z b447a852c1a3: Waiting 2023-05-06T10:21:18.3407855Z 2ea9ef18f7bc: Verifying Checksum 2023-05-06T10:21:18.3408382Z 2ea9ef18f7bc: Download complete 2023-05-06T10:21:18.5619038Z 456d651ccb27: Verifying Checksum 2023-05-06T10:21:18.5619416Z 456d651ccb27: Download complete 2023-05-06T10:21:18.6545853Z 2c4f28751241: Verifying Checksum 2023-05-06T10:21:18.6546815Z 2c4f28751241: Download complete 2023-05-06T10:21:18.8401510Z 94a24fc23c7c: Verifying Checksum 2023-05-06T10:21:18.8402075Z 94a24fc23c7c: Download complete 2023-05-06T10:21:18.8467007Z a9e5e49da50d: Verifying Checksum 2023-05-06T10:21:18.8467324Z a9e5e49da50d: Download complete 2023-05-06T10:21:19.1633858Z a41332b1bef3: Verifying Checksum 2023-05-06T10:21:19.1634378Z a41332b1bef3: Download complete 2023-05-06T10:21:19.1974765Z 8030ee497010: Download complete 2023-05-06T10:21:19.2151024Z 456d651ccb27: Pull complete 2023-05-06T10:21:19.4262584Z 2ea9ef18f7bc: Pull complete 2023-05-06T10:21:19.4632562Z aed8824e0575: Download complete 2023-05-06T10:21:19.8163210Z 8c74646e5134: Download complete 2023-05-06T10:21:20.1175291Z 361cd14963a4: Verifying Checksum 2023-05-06T10:21:20.1175858Z 361cd14963a4: Download complete 2023-05-06T10:21:20.4806701Z 94a24fc23c7c: Pull complete 2023-05-06T10:21:20.5411417Z 2c4f28751241: Pull complete 2023-05-06T10:21:20.5978088Z a9e5e49da50d: Pull complete 2023-05-06T10:21:22.7103899Z 785c0ce62454: Verifying Checksum 2023-05-06T10:21:22.7104409Z 785c0ce62454: Download complete 2023-05-06T10:21:23.0050504Z f60e0fad50ae: Verifying Checksum 2023-05-06T10:21:23.0051006Z f60e0fad50ae: Download complete 2023-05-06T10:21:23.3011630Z a9a53207079d: Verifying Checksum 2023-05-06T10:21:23.3012117Z a9a53207079d: Download complete 2023-05-06T10:21:23.5934266Z bbd967773990: Download complete 2023-05-06T10:21:24.7810040Z af757f6caada: Verifying Checksum 2023-05-06T10:21:24.7810359Z af757f6caada: Download complete 2023-05-06T10:21:25.0828426Z 66fe2d6256cd: Download complete 2023-05-06T10:21:25.3727075Z 84db7a1a0115: Verifying Checksum 2023-05-06T10:21:25.3727398Z 84db7a1a0115: Download complete 2023-05-06T10:21:25.6595446Z 492e2f843417: Verifying Checksum 2023-05-06T10:21:25.6595739Z 492e2f843417: Download complete 2023-05-06T10:21:33.0152464Z 1aece8b4bd63: Verifying Checksum 2023-05-06T10:21:33.0152975Z 1aece8b4bd63: Download complete 2023-05-06T10:21:33.3295040Z 2fc2deccd088: Verifying Checksum 2023-05-06T10:21:33.3295597Z 2fc2deccd088: Download complete 2023-05-06T10:21:33.6225226Z 0d1eb173114d: Verifying Checksum 2023-05-06T10:21:33.6225876Z 0d1eb173114d: Download complete 2023-05-06T10:21:33.9134707Z 777e3ba9d1c1: Verifying Checksum 2023-05-06T10:21:33.9135076Z 777e3ba9d1c1: Download complete 2023-05-06T10:21:34.5121024Z 939f92768065: Verifying 
Checksum 2023-05-06T10:21:34.5121512Z 939f92768065: Download complete 2023-05-06T10:21:35.9671597Z 63919a7ae761: Verifying Checksum 2023-05-06T10:21:35.9672055Z 63919a7ae761: Download complete 2023-05-06T10:21:36.2883046Z eb2b82dbdb91: Verifying Checksum 2023-05-06T10:21:36.2883580Z eb2b82dbdb91: Download complete 2023-05-06T10:21:36.5853512Z e1ab30293ccd: Verifying Checksum 2023-05-06T10:21:36.5854022Z e1ab30293ccd: Download complete 2023-05-06T10:21:37.0199979Z 25b1af63a2db: Verifying Checksum 2023-05-06T10:21:37.0200488Z 25b1af63a2db: Download complete 2023-05-06T10:21:37.2966621Z ecf188d26601: Verifying Checksum 2023-05-06T10:21:37.2967118Z ecf188d26601: Download complete 2023-05-06T10:21:37.3124355Z 2c6c0d559c9c: Download complete 2023-05-06T10:21:37.5957322Z b17a024d1a22: Verifying Checksum 2023-05-06T10:21:37.5958308Z b17a024d1a22: Download complete 2023-05-06T10:21:37.8745935Z 7975d3970e17: Download complete 2023-05-06T10:21:38.1650758Z 4ba73cbaa687: Verifying Checksum 2023-05-06T10:21:38.1651351Z 4ba73cbaa687: Download complete 2023-05-06T10:21:39.3154155Z 91131a686cb8: Verifying Checksum 2023-05-06T10:21:39.3154545Z 91131a686cb8: Download complete 2023-05-06T10:21:39.6244230Z 76aca39da895: Verifying Checksum 2023-05-06T10:21:39.6244727Z 76aca39da895: Download complete 2023-05-06T10:21:39.9129747Z 6835aecdd332: Verifying Checksum 2023-05-06T10:21:39.9130341Z 6835aecdd332: Download complete 2023-05-06T10:21:40.6173586Z 1f319dc82804: Verifying Checksum 2023-05-06T10:21:40.6174017Z 1f319dc82804: Download complete 2023-05-06T10:21:40.8883264Z f214c8148de0: Verifying Checksum 2023-05-06T10:21:40.8883781Z f214c8148de0: Download complete 2023-05-06T10:21:41.8800288Z 91d6a73af5c3: Verifying Checksum 2023-05-06T10:21:41.8800959Z 91d6a73af5c3: Download complete 2023-05-06T10:21:41.9540265Z 0734b97bc50e: Verifying Checksum 2023-05-06T10:21:41.9540697Z 0734b97bc50e: Download complete 2023-05-06T10:21:42.1704540Z d4ca02113f1b: Download complete 2023-05-06T10:21:42.2499277Z 26aa90a20f4f: Verifying Checksum 2023-05-06T10:21:42.2499757Z 26aa90a20f4f: Download complete 2023-05-06T10:21:42.5507610Z 2562aa6e5371: Download complete 2023-05-06T10:21:42.8205311Z 6c42e3e5156a: Verifying Checksum 2023-05-06T10:21:43.0955654Z df5e763c8a37: Verifying Checksum 2023-05-06T10:21:43.0956200Z df5e763c8a37: Download complete 2023-05-06T10:21:43.6601032Z 4f47abf0f8af: Verifying Checksum 2023-05-06T10:21:43.6601465Z 4f47abf0f8af: Download complete 2023-05-06T10:21:43.9494026Z f407718a23c2: Verifying Checksum 2023-05-06T10:21:43.9494545Z f407718a23c2: Download complete 2023-05-06T10:21:44.2412789Z b447a852c1a3: Verifying Checksum 2023-05-06T10:21:44.2413385Z b447a852c1a3: Download complete 2023-05-06T10:21:44.5414043Z 7e786aeb415d: Download complete 2023-05-06T10:21:44.8209447Z 6d8858dda34e: Verifying Checksum 2023-05-06T10:21:45.4665702Z 6d8858dda34e: Download complete 2023-05-06T10:21:45.4666001Z 9dd9d9996530: Verifying Checksum 2023-05-06T10:21:45.4666381Z 9dd9d9996530: Download complete 2023-05-06T10:21:45.7522862Z ea1853196497: Verifying Checksum 2023-05-06T10:21:45.7523301Z ea1853196497: Download complete 2023-05-06T10:21:46.8923461Z 0e5b4ecd9b1a: Verifying Checksum 2023-05-06T10:21:46.8923970Z 0e5b4ecd9b1a: Download complete 2023-05-06T10:21:47.2359973Z 5c594fb0ff56: Verifying Checksum 2023-05-06T10:21:47.2360413Z 5c594fb0ff56: Download complete 2023-05-06T10:21:48.0299438Z 98d47d054712: Verifying Checksum 2023-05-06T10:21:48.0299784Z 98d47d054712: Download complete 2023-05-06T10:21:48.3199619Z 4e0a1162b31a: 
Download complete 2023-05-06T10:21:51.2408104Z 1aece8b4bd63: Pull complete 2023-05-06T10:21:51.6691714Z 8030ee497010: Pull complete 2023-05-06T10:21:52.0407924Z a41332b1bef3: Pull complete 2023-05-06T10:21:52.5015411Z aed8824e0575: Pull complete 2023-05-06T10:21:53.7146833Z f8cd3744a217: Verifying Checksum 2023-05-06T10:21:53.7147399Z f8cd3744a217: Download complete 2023-05-06T10:22:17.5575833Z 27528af53d56: Verifying Checksum 2023-05-06T10:22:17.5576373Z 27528af53d56: Download complete 2023-05-06T10:22:18.3911929Z ecf188d26601: Pull complete 2023-05-06T10:22:18.4504260Z 8c74646e5134: Pull complete 2023-05-06T10:22:18.5032256Z 361cd14963a4: Pull complete 2023-05-06T10:22:23.5752020Z 785c0ce62454: Pull complete 2023-05-06T10:22:23.9368885Z f60e0fad50ae: Pull complete 2023-05-06T10:22:24.4134605Z a9a53207079d: Pull complete 2023-05-06T10:22:24.8851436Z bbd967773990: Pull complete 2023-05-06T10:22:26.9967472Z af757f6caada: Pull complete 2023-05-06T10:22:27.5048934Z 66fe2d6256cd: Pull complete 2023-05-06T10:22:27.9448011Z 84db7a1a0115: Pull complete 2023-05-06T10:22:28.3898985Z 492e2f843417: Pull complete 2023-05-06T10:23:02.2702421Z 98d47d054712: Pull complete 2023-05-06T10:23:02.3279919Z 2fc2deccd088: Pull complete 2023-05-06T10:23:02.3837338Z 0d1eb173114d: Pull complete 2023-05-06T10:23:02.4360686Z 777e3ba9d1c1: Pull complete 2023-05-06T10:23:02.4893602Z 1f1655ea70ca: Pull complete 2023-05-06T10:23:02.5401613Z 939f92768065: Pull complete 2023-05-06T10:23:05.2208490Z 63919a7ae761: Pull complete 2023-05-06T10:23:05.6736221Z eb2b82dbdb91: Pull complete 2023-05-06T10:23:06.0903152Z e1ab30293ccd: Pull complete 2023-05-06T10:23:06.3661549Z 25b1af63a2db: Pull complete 2023-05-06T10:23:06.7236990Z 2c6c0d559c9c: Pull complete 2023-05-06T10:23:07.1362939Z b17a024d1a22: Pull complete 2023-05-06T10:23:13.6886180Z 0734b97bc50e: Pull complete 2023-05-06T10:23:14.1289523Z 7975d3970e17: Pull complete 2023-05-06T10:23:14.5965414Z 4ba73cbaa687: Pull complete 2023-05-06T10:23:16.0292911Z 91131a686cb8: Pull complete 2023-05-06T10:23:16.0830692Z 76aca39da895: Pull complete 2023-05-06T10:23:16.1363137Z 6835aecdd332: Pull complete 2023-05-06T10:23:16.4610381Z 1f319dc82804: Pull complete 2023-05-06T10:23:16.5151928Z f214c8148de0: Pull complete 2023-05-06T10:23:17.7030249Z 91d6a73af5c3: Pull complete 2023-05-06T10:23:17.7555585Z d4ca02113f1b: Pull complete 2023-05-06T10:23:17.8067492Z 26aa90a20f4f: Pull complete 2023-05-06T10:23:17.9099227Z a5618a6b1eaa: Pull complete 2023-05-06T10:23:17.9613625Z 2562aa6e5371: Pull complete 2023-05-06T10:23:37.6484558Z f8cd3744a217: Pull complete 2023-05-06T10:23:37.7047935Z 6c42e3e5156a: Pull complete 2023-05-06T10:23:37.7575568Z df5e763c8a37: Pull complete 2023-05-06T10:23:37.9697641Z 4f47abf0f8af: Pull complete 2023-05-06T10:23:38.0217587Z f407718a23c2: Pull complete 2023-05-06T10:23:38.0748555Z b447a852c1a3: Pull complete 2023-05-06T10:23:38.1261697Z 7e786aeb415d: Pull complete 2023-05-06T10:23:38.1775581Z 6d8858dda34e: Pull complete 2023-05-06T10:23:38.7871756Z 9dd9d9996530: Pull complete 2023-05-06T10:23:38.8407086Z ea1853196497: Pull complete 2023-05-06T10:23:40.6148875Z 0e5b4ecd9b1a: Pull complete 2023-05-06T10:23:40.6662425Z 5c594fb0ff56: Pull complete 2023-05-06T10:24:16.6738007Z 27528af53d56: Pull complete 2023-05-06T10:24:16.7310331Z 4e0a1162b31a: Pull complete 2023-05-06T10:24:16.7419595Z Digest: sha256:4a48e0cd0a3dfdf3bf150da9befb0a7efa4b7d57c18bf4a7b3a0cfb6bc87bda9 2023-05-06T10:24:16.7461118Z Status: Downloaded newer image for 
308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:24:16.7491114Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:24:16.7647534Z ##[group]Run pytorch/test-infra/.github/actions/setup-nvidia@main 2023-05-06T10:24:16.7647835Z with: 2023-05-06T10:24:16.7648035Z driver-version: 525.105.17 2023-05-06T10:24:16.7648253Z env: 2023-05-06T10:24:16.7648462Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:16.7648672Z ##[endgroup] 2023-05-06T10:24:16.7681543Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2023-05-06T10:24:16.7681835Z with: 2023-05-06T10:24:16.7682025Z timeout_minutes: 10 2023-05-06T10:24:16.7682249Z max_attempts: 3 2023-05-06T10:24:16.7691468Z command: # Is it disgusting to have a full shell script here in this github action? Sure # But is it the best way to make it so that this action relies on nothing else? Absolutely set -eou pipefail DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID) DRIVER_FN="NVIDIA-Linux-x86_64-${DRIVER_VERSION}.run" YUM_REPO_URL="https://nvidia.github.io/nvidia-docker/${DISTRIBUTION}/nvidia-docker.repo" install_nvidia_docker2_amzn2() { ( set -x # Needed for yum-config-manager sudo yum install -y yum-utils sudo yum-config-manager --add-repo "${YUM_REPO_URL}" sudo yum install -y nvidia-docker2 sudo systemctl restart docker ) } install_nvidia_docker2_ubuntu20() { ( set -x sudo apt-get install -y nvidia-docker2 sudo systemctl restart docker ) } pre_install_nvidia_driver_amzn2() { ( # Purge any nvidia driver installed from RHEL repo sudo yum remove -y nvidia-driver-latest-dkms ) } install_nvidia_driver_common() { ( # Try to gather more information about the runner and its existing NVIDIA driver if any echo "Before installing NVIDIA driver" lspci lsmod modinfo nvidia || true HAS_NVIDIA_DRIVER=0 # Check if NVIDIA driver has already been installed if [ -x "$(command -v nvidia-smi)" ]; then set +e # The driver exists, check its version next. Also check only the first GPU if there are more than one of them # so that the same driver version is not print over multiple lines INSTALLED_DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0) NVIDIA_SMI_STATUS=$? if [ "$NVIDIA_SMI_STATUS" -ne 0 ] && [ "$NVIDIA_SMI_STATUS" -ne 14 ]; then echo "Failed to get NVIDIA driver version ($INSTALLED_DRIVER_VERSION). Continuing" elif [ "$INSTALLED_DRIVER_VERSION" != "$DRIVER_VERSION" ]; then echo "NVIDIA driver ($INSTALLED_DRIVER_VERSION) has been installed, but we expect to have $DRIVER_VERSION instead. Continuing" else HAS_NVIDIA_DRIVER=1 echo "NVIDIA driver ($INSTALLED_DRIVER_VERSION) has already been installed. Skipping NVIDIA driver installation" fi set -e fi if [ "$HAS_NVIDIA_DRIVER" -eq 0 ]; then # CAUTION: this may need to be updated in future if [ "${DISTRIBUTION}" != ubuntu20.04 ]; then sudo yum groupinstall -y "Development Tools" # ensure our kernel install is the same as our underlying kernel, # groupinstall "Development Tools" has a habit of mismatching kernel headers sudo yum install -y "kernel-devel-uname-r == $(uname -r)" sudo modprobe backlight fi sudo curl -fsL -o /tmp/nvidia_driver "https://s3.amazonaws.com/ossci-linux/nvidia_driver/$DRIVER_FN" set +e sudo /bin/bash /tmp/nvidia_driver -s --no-drm NVIDIA_INSTALLATION_STATUS=$? 
RESET_GPU=0 if [ "$NVIDIA_INSTALLATION_STATUS" -ne 0 ]; then sudo cat /var/log/nvidia-installer.log # Fail to install NVIDIA driver, try to reset the GPU RESET_GPU=1 elif [ -x "$(command -v nvidia-smi)" ]; then # Check again if nvidia-smi works even if the driver installation completes successfully INSTALLED_DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0) NVIDIA_SMI_STATUS=$? if [ "$NVIDIA_SMI_STATUS" -ne 0 ] && [ "$NVIDIA_SMI_STATUS" -ne 14 ]; then RESET_GPU=1 fi fi if [ "$RESET_GPU" -eq 1 ]; then NVIDIA_DEVICES=$(lspci -D | grep -i NVIDIA | cut -d' ' -f1) # The GPU can get stuck in a failure state if somehow the test crashs the GPU microcode. When this # happens, we'll try to reset all NVIDIA devices https://github.com/pytorch/pytorch/issues/88388 for PCI_ID in $NVIDIA_DEVICES; do DEVICE_ENABLED=$(cat /sys/bus/pci/devices/$PCI_ID/enable) echo "Reseting $PCI_ID (enabled state: $DEVICE_ENABLED)" # This requires sudo permission of course echo "1" | sudo tee /sys/bus/pci/devices/$PCI_ID/reset sleep 1 done fi sudo rm -fv /tmp/nvidia_driver set -e fi ) } post_install_nvidia_driver_common() { ( sudo modprobe nvidia || true echo "After installing NVIDIA driver" lspci lsmod modinfo nvidia || true ( set +e nvidia-smi # NB: Annoyingly, nvidia-smi command returns successfully with return code 0 even in # the case where the driver has already crashed as it still can get the driver version # and some basic information like the bus ID. However, the rest of the information # would be missing (ERR!), for example: # # +-----------------------------------------------------------------------------+ # | NVIDIA-SMI 525.89.02 Driver Version: 525.89.02 CUDA Version: 12.0 | # |-------------------------------+----------------------+----------------------+ # | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | # | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | # | | | MIG M. | # |===============================+======================+======================| # | 0 ERR! Off | 00000000:00:1E.0 Off | ERR! | # |ERR! ERR! ERR! ERR! / ERR! | 4184MiB / 23028MiB | ERR! Default | # | | | ERR! | # +-------------------------------+----------------------+----------------------+ # # +-----------------------------------------------------------------------------+ # | Processes: | # | GPU GI CI PID Type Process name GPU Memory | # | ID ID Usage | # |=============================================================================| # +-----------------------------------------------------------------------------+ # # This should be reported as a failure instead as it will guarantee to fail when # Docker tries to run with --gpus all # # So, the correct check here is to query one of the missing piece of info like # GPU name, so that the command can fail accordingly nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 NVIDIA_SMI_STATUS=$? 
# Allowable exit statuses for nvidia-smi, see: https://github.com/NVIDIA/gpu-operator/issues/285 if [ "$NVIDIA_SMI_STATUS" -eq 0 ] || [ "$NVIDIA_SMI_STATUS" -eq 14 ]; then echo "INFO: Ignoring allowed status ${NVIDIA_SMI_STATUS}" else echo "ERROR: nvidia-smi exited with unresolved status ${NVIDIA_SMI_STATUS}" exit ${NVIDIA_SMI_STATUS} fi set -e ) ) } install_nvidia_driver_amzn2() { ( set -x pre_install_nvidia_driver_amzn2 install_nvidia_driver_common post_install_nvidia_driver_common ) } install_nvidia_driver_ubuntu20() { ( set -x install_nvidia_driver_common post_install_nvidia_driver_common ) } echo "== Installing nvidia driver ${DRIVER_FN} ==" case "${DISTRIBUTION}" in amzn*) install_nvidia_driver_amzn2 ;; ubuntu20.04) install_nvidia_driver_ubuntu20 ;; *) echo "ERROR: Unknown distribution ${DISTRIBUTION}" exit 1 ;; esac # Install container toolkit based on distribution echo "== Installing nvidia container toolkit for ${DISTRIBUTION} ==" case "${DISTRIBUTION}" in amzn*) install_nvidia_docker2_amzn2 ;; ubuntu20.04) install_nvidia_docker2_ubuntu20 ;; *) echo "ERROR: Unknown distribution ${DISTRIBUTION}" exit 1 ;; esac echo "GPU_FLAG=--gpus all" >> "${GITHUB_ENV}" 2023-05-06T10:24:16.7700968Z retry_wait_seconds: 10 2023-05-06T10:24:16.7701205Z polling_interval_seconds: 1 2023-05-06T10:24:16.7701450Z warning_on_retry: true 2023-05-06T10:24:16.7701688Z continue_on_error: false 2023-05-06T10:24:16.7701890Z env: 2023-05-06T10:24:16.7702099Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:16.7702332Z DRIVER_VERSION: 525.105.17 2023-05-06T10:24:16.7702646Z ##[endgroup] 2023-05-06T10:24:16.8420356Z == Installing nvidia driver NVIDIA-Linux-x86_64-525.105.17.run == 2023-05-06T10:24:16.8421302Z + install_nvidia_driver_common 2023-05-06T10:24:16.8423154Z + echo 'Before installing NVIDIA driver' 2023-05-06T10:24:16.8423603Z + lspci 2023-05-06T10:24:16.8426290Z Before installing NVIDIA driver 2023-05-06T10:24:16.8541183Z 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 2023-05-06T10:24:16.8541801Z 00:01.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 03) 2023-05-06T10:24:16.8542329Z 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 2023-05-06T10:24:16.8543074Z 00:03.0 Non-VGA unclassified device: Red Hat, Inc. Virtio SCSI 2023-05-06T10:24:16.8543627Z 00:04.0 3D controller: NVIDIA Corporation Device 20b0 (rev a1) 2023-05-06T10:24:16.8544179Z 00:05.0 Ethernet controller: Red Hat, Inc. Virtio network device 2023-05-06T10:24:16.8544963Z 00:06.0 Unclassified device [00ff]: Red Hat, Inc. 
Virtio RNG 2023-05-06T10:24:16.8545304Z + lsmod 2023-05-06T10:24:16.8568841Z Module Size Used by 2023-05-06T10:24:16.8569394Z btrfs 1536000 0 2023-05-06T10:24:16.8569770Z blake2b_generic 20480 0 2023-05-06T10:24:16.8570101Z xor 24576 1 btrfs 2023-05-06T10:24:16.8570505Z zstd_compress 225280 1 btrfs 2023-05-06T10:24:16.8571263Z raid6_pq 122880 1 btrfs 2023-05-06T10:24:16.8571633Z ufs 106496 0 2023-05-06T10:24:16.8571984Z msdos 20480 0 2023-05-06T10:24:16.8572354Z xfs 1753088 0 2023-05-06T10:24:16.8572667Z nvidia_modeset 1241088 0 2023-05-06T10:24:16.8573604Z nvidia_uvm 1388544 0 2023-05-06T10:24:16.8573987Z veth 32768 0 2023-05-06T10:24:16.8574339Z xt_conntrack 16384 1 2023-05-06T10:24:16.8574816Z xt_MASQUERADE 20480 1 2023-05-06T10:24:16.8575145Z xfrm_user 40960 1 2023-05-06T10:24:16.8575403Z xfrm_algo 16384 1 xfrm_user 2023-05-06T10:24:16.8575830Z xt_addrtype 16384 2 2023-05-06T10:24:16.8576205Z iptable_filter 16384 1 2023-05-06T10:24:16.8576610Z iptable_nat 16384 1 2023-05-06T10:24:16.8577107Z nf_nat 49152 2 iptable_nat,xt_MASQUERADE 2023-05-06T10:24:16.8577501Z bpfilter 16384 0 2023-05-06T10:24:16.8577880Z br_netfilter 28672 0 2023-05-06T10:24:16.8578790Z bridge 307200 1 br_netfilter 2023-05-06T10:24:16.8579256Z stp 16384 1 bridge 2023-05-06T10:24:16.8579647Z llc 16384 2 bridge,stp 2023-05-06T10:24:16.8579995Z aufs 270336 0 2023-05-06T10:24:16.8580377Z overlay 151552 0 2023-05-06T10:24:16.8580693Z nls_iso8859_1 16384 1 2023-05-06T10:24:16.8580946Z dm_multipath 40960 0 2023-05-06T10:24:16.8581171Z scsi_dh_rdac 16384 0 2023-05-06T10:24:16.8581376Z scsi_dh_emc 16384 0 2023-05-06T10:24:16.8581611Z scsi_dh_alua 20480 0 2023-05-06T10:24:16.8581876Z nvidia 56487936 2 nvidia_uvm,nvidia_modeset 2023-05-06T10:24:16.8582114Z binfmt_misc 24576 1 2023-05-06T10:24:16.8582341Z psmouse 180224 0 2023-05-06T10:24:16.8582566Z crct10dif_pclmul 16384 1 2023-05-06T10:24:16.8582780Z input_leds 16384 0 2023-05-06T10:24:16.8583006Z crc32_pclmul 16384 0 2023-05-06T10:24:16.8583233Z ghash_clmulni_intel 16384 0 2023-05-06T10:24:16.8583449Z virtio_net 61440 0 2023-05-06T10:24:16.8583683Z net_failover 20480 1 virtio_net 2023-05-06T10:24:16.8583917Z aesni_intel 376832 0 2023-05-06T10:24:16.8584145Z failover 16384 1 net_failover 2023-05-06T10:24:16.8584381Z serio_raw 20480 0 2023-05-06T10:24:16.8584671Z crypto_simd 16384 1 aesni_intel 2023-05-06T10:24:16.8584979Z cryptd 24576 2 crypto_simd,ghash_clmulni_intel 2023-05-06T10:24:16.8585455Z sch_fq_codel 24576 13 2023-05-06T10:24:16.8585685Z drm 618496 1 nvidia 2023-05-06T10:24:16.8585916Z efi_pstore 16384 0 2023-05-06T10:24:16.8586123Z virtio_rng 16384 0 2023-05-06T10:24:16.8586394Z ip_tables 32768 2 iptable_filter,iptable_nat 2023-05-06T10:24:16.8586785Z x_tables 53248 6 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,iptable_nat,xt_MASQUERADE 2023-05-06T10:24:16.8587101Z autofs4 49152 2 2023-05-06T10:24:16.8587314Z + modinfo nvidia 2023-05-06T10:24:16.8588831Z filename: /lib/modules/5.15.0-1031-gcp/kernel/drivers/video/nvidia.ko 2023-05-06T10:24:16.8589367Z firmware: nvidia/525.105.17/gsp_tu10x.bin 2023-05-06T10:24:16.8589816Z firmware: nvidia/525.105.17/gsp_ad10x.bin 2023-05-06T10:24:16.8590368Z alias: char-major-195-* 2023-05-06T10:24:16.8590742Z version: 525.105.17 2023-05-06T10:24:16.8591084Z supported: external 2023-05-06T10:24:16.8591448Z license: NVIDIA 2023-05-06T10:24:16.8591877Z srcversion: E64DFEE541C869DE69061E6 2023-05-06T10:24:16.8592294Z alias: pci:v000010DEd*sv*sd*bc06sc80i00* 2023-05-06T10:24:16.8592718Z alias: pci:v000010DEd*sv*sd*bc03sc02i00* 
2023-05-06T10:24:16.8592992Z alias: pci:v000010DEd*sv*sd*bc03sc00i00* 2023-05-06T10:24:16.8593222Z depends: drm 2023-05-06T10:24:16.8593445Z retpoline: Y 2023-05-06T10:24:16.8593665Z name: nvidia 2023-05-06T10:24:16.8594034Z vermagic: 5.15.0-1031-gcp SMP mod_unload modversions 2023-05-06T10:24:16.8594396Z parm: NvSwitchRegDwords:NvSwitch regkey (charp) 2023-05-06T10:24:16.8594756Z parm: NvSwitchBlacklist:NvSwitchBlacklist=uuid[,uuid...] (charp) 2023-05-06T10:24:16.8595244Z parm: NVreg_ResmanDebugLevel:int 2023-05-06T10:24:16.8595669Z parm: NVreg_RmLogonRC:int 2023-05-06T10:24:16.8595984Z parm: NVreg_ModifyDeviceFiles:int 2023-05-06T10:24:16.8596255Z parm: NVreg_DeviceFileUID:int 2023-05-06T10:24:16.8596518Z parm: NVreg_DeviceFileGID:int 2023-05-06T10:24:16.8597070Z parm: NVreg_DeviceFileMode:int 2023-05-06T10:24:16.8597405Z parm: NVreg_InitializeSystemMemoryAllocations:int 2023-05-06T10:24:16.8597819Z parm: NVreg_UsePageAttributeTable:int 2023-05-06T10:24:16.8598284Z parm: NVreg_EnablePCIeGen3:int 2023-05-06T10:24:16.8598953Z parm: NVreg_EnableMSI:int 2023-05-06T10:24:16.8599342Z parm: NVreg_TCEBypassMode:int 2023-05-06T10:24:16.8599625Z parm: NVreg_EnableStreamMemOPs:int 2023-05-06T10:24:16.8599947Z parm: NVreg_RestrictProfilingToAdminUsers:int 2023-05-06T10:24:16.8600286Z parm: NVreg_PreserveVideoMemoryAllocations:int 2023-05-06T10:24:16.8600625Z parm: NVreg_EnableS0ixPowerManagement:int 2023-05-06T10:24:16.8600996Z parm: NVreg_S0ixPowerManagementVideoMemoryThreshold:int 2023-05-06T10:24:16.8601351Z parm: NVreg_DynamicPowerManagement:int 2023-05-06T10:24:16.8601740Z parm: NVreg_DynamicPowerManagementVideoMemoryThreshold:int 2023-05-06T10:24:16.8602230Z parm: NVreg_EnableGpuFirmware:int 2023-05-06T10:24:16.8602525Z parm: NVreg_EnableGpuFirmwareLogs:int 2023-05-06T10:24:16.8602835Z parm: NVreg_OpenRmEnableUnsupportedGpus:int 2023-05-06T10:24:16.8603162Z parm: NVreg_EnableUserNUMAManagement:int 2023-05-06T10:24:16.8603459Z parm: NVreg_MemoryPoolSize:int 2023-05-06T10:24:16.8603724Z parm: NVreg_KMallocHeapMaxSize:int 2023-05-06T10:24:16.8604010Z parm: NVreg_VMallocHeapMaxSize:int 2023-05-06T10:24:16.8604286Z parm: NVreg_IgnoreMMIOCheck:int 2023-05-06T10:24:16.8604538Z parm: NVreg_NvLinkDisable:int 2023-05-06T10:24:16.8604848Z parm: NVreg_EnablePCIERelaxedOrderingMode:int 2023-05-06T10:24:16.8605243Z parm: NVreg_RegisterPCIDriver:int 2023-05-06T10:24:16.8605537Z parm: NVreg_EnableDbgBreakpoint:int 2023-05-06T10:24:16.8605808Z parm: NVreg_RegistryDwords:charp 2023-05-06T10:24:16.8606297Z parm: NVreg_RegistryDwordsPerDevice:charp 2023-05-06T10:24:16.8606575Z parm: NVreg_RmMsg:charp 2023-05-06T10:24:16.8606815Z parm: NVreg_GpuBlacklist:charp 2023-05-06T10:24:16.8607100Z parm: NVreg_TemporaryFilePath:charp 2023-05-06T10:24:16.8607381Z parm: NVreg_ExcludedGpus:charp 2023-05-06T10:24:16.8607651Z parm: NVreg_DmaRemapPeerMmio:int 2023-05-06T10:24:16.8607919Z parm: rm_firmware_active:charp 2023-05-06T10:24:16.8608165Z + HAS_NVIDIA_DRIVER=0 2023-05-06T10:24:16.8608511Z ++ command -v nvidia-smi 2023-05-06T10:24:16.8608808Z + '[' -x /usr/bin/nvidia-smi ']' 2023-05-06T10:24:16.8609031Z + set +e 2023-05-06T10:24:16.8609393Z ++ nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0 2023-05-06T10:24:16.8808385Z + INSTALLED_DRIVER_VERSION=525.105.17 2023-05-06T10:24:16.8808813Z + NVIDIA_SMI_STATUS=0 2023-05-06T10:24:16.8809349Z + '[' 0 -ne 0 ']' 2023-05-06T10:24:16.8809678Z + '[' 525.105.17 '!=' 525.105.17 ']' 2023-05-06T10:24:16.8809906Z + HAS_NVIDIA_DRIVER=1 2023-05-06T10:24:16.8810373Z + echo 
'NVIDIA driver (525.105.17) has already been installed. Skipping NVIDIA driver installation' 2023-05-06T10:24:16.8810854Z + set -e 2023-05-06T10:24:16.8811276Z + '[' 1 -eq 0 ']' 2023-05-06T10:24:16.8811645Z NVIDIA driver (525.105.17) has already been installed. Skipping NVIDIA driver installation 2023-05-06T10:24:16.8811993Z + post_install_nvidia_driver_common 2023-05-06T10:24:16.8814640Z + sudo modprobe nvidia 2023-05-06T10:24:16.8937753Z + echo 'After installing NVIDIA driver' 2023-05-06T10:24:16.8938069Z After installing NVIDIA driver 2023-05-06T10:24:16.8938294Z + lspci 2023-05-06T10:24:16.9049270Z 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 2023-05-06T10:24:16.9049681Z 00:01.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 03) 2023-05-06T10:24:16.9050232Z 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 2023-05-06T10:24:16.9050755Z 00:03.0 Non-VGA unclassified device: Red Hat, Inc. Virtio SCSI 2023-05-06T10:24:16.9051087Z 00:04.0 3D controller: NVIDIA Corporation Device 20b0 (rev a1) 2023-05-06T10:24:16.9051423Z 00:05.0 Ethernet controller: Red Hat, Inc. Virtio network device 2023-05-06T10:24:16.9051738Z 00:06.0 Unclassified device [00ff]: Red Hat, Inc. Virtio RNG 2023-05-06T10:24:16.9051987Z + lsmod 2023-05-06T10:24:16.9074761Z Module Size Used by 2023-05-06T10:24:16.9075271Z btrfs 1536000 0 2023-05-06T10:24:16.9075552Z blake2b_generic 20480 0 2023-05-06T10:24:16.9075785Z xor 24576 1 btrfs 2023-05-06T10:24:16.9076109Z zstd_compress 225280 1 btrfs 2023-05-06T10:24:16.9076513Z raid6_pq 122880 1 btrfs 2023-05-06T10:24:16.9077160Z ufs 106496 0 2023-05-06T10:24:16.9077376Z msdos 20480 0 2023-05-06T10:24:16.9077577Z xfs 1753088 0 2023-05-06T10:24:16.9077802Z nvidia_modeset 1241088 0 2023-05-06T10:24:16.9078046Z nvidia_uvm 1388544 0 2023-05-06T10:24:16.9078249Z veth 32768 0 2023-05-06T10:24:16.9078467Z xt_conntrack 16384 1 2023-05-06T10:24:16.9078693Z xt_MASQUERADE 20480 1 2023-05-06T10:24:16.9078901Z xfrm_user 40960 1 2023-05-06T10:24:16.9079131Z xfrm_algo 16384 1 xfrm_user 2023-05-06T10:24:16.9079372Z xt_addrtype 16384 2 2023-05-06T10:24:16.9079585Z iptable_filter 16384 1 2023-05-06T10:24:16.9079809Z iptable_nat 16384 1 2023-05-06T10:24:16.9080070Z nf_nat 49152 2 iptable_nat,xt_MASQUERADE 2023-05-06T10:24:16.9080312Z bpfilter 16384 0 2023-05-06T10:24:16.9080533Z br_netfilter 28672 0 2023-05-06T10:24:16.9080772Z bridge 307200 1 br_netfilter 2023-05-06T10:24:16.9080997Z stp 16384 1 bridge 2023-05-06T10:24:16.9081230Z llc 16384 2 bridge,stp 2023-05-06T10:24:16.9081456Z aufs 270336 0 2023-05-06T10:24:16.9081878Z overlay 151552 0 2023-05-06T10:24:16.9082095Z nls_iso8859_1 16384 1 2023-05-06T10:24:16.9082317Z dm_multipath 40960 0 2023-05-06T10:24:16.9082528Z scsi_dh_rdac 16384 0 2023-05-06T10:24:16.9082746Z scsi_dh_emc 16384 0 2023-05-06T10:24:16.9082967Z scsi_dh_alua 20480 0 2023-05-06T10:24:16.9083239Z nvidia 56487936 2 nvidia_uvm,nvidia_modeset 2023-05-06T10:24:16.9083480Z binfmt_misc 24576 1 2023-05-06T10:24:16.9083702Z psmouse 180224 0 2023-05-06T10:24:16.9083932Z crct10dif_pclmul 16384 1 2023-05-06T10:24:16.9084142Z input_leds 16384 0 2023-05-06T10:24:16.9084362Z crc32_pclmul 16384 0 2023-05-06T10:24:16.9084589Z ghash_clmulni_intel 16384 0 2023-05-06T10:24:16.9084802Z virtio_net 61440 0 2023-05-06T10:24:16.9085038Z net_failover 20480 1 virtio_net 2023-05-06T10:24:16.9085346Z aesni_intel 376832 0 2023-05-06T10:24:16.9085580Z failover 16384 1 net_failover 2023-05-06T10:24:16.9085818Z serio_raw 20480 0 
2023-05-06T10:24:16.9086056Z crypto_simd 16384 1 aesni_intel 2023-05-06T10:24:16.9086324Z cryptd 24576 2 crypto_simd,ghash_clmulni_intel 2023-05-06T10:24:16.9086591Z sch_fq_codel 24576 13 2023-05-06T10:24:16.9086825Z drm 618496 1 nvidia 2023-05-06T10:24:16.9087037Z efi_pstore 16384 0 2023-05-06T10:24:16.9087257Z virtio_rng 16384 0 2023-05-06T10:24:16.9087525Z ip_tables 32768 2 iptable_filter,iptable_nat 2023-05-06T10:24:16.9087904Z x_tables 53248 6 xt_conntrack,iptable_filter,xt_addrtype,ip_tables,iptable_nat,xt_MASQUERADE 2023-05-06T10:24:16.9088216Z autofs4 49152 2 2023-05-06T10:24:16.9088443Z + modinfo nvidia 2023-05-06T10:24:16.9090882Z filename: /lib/modules/5.15.0-1031-gcp/kernel/drivers/video/nvidia.ko 2023-05-06T10:24:16.9091432Z firmware: nvidia/525.105.17/gsp_tu10x.bin 2023-05-06T10:24:16.9091845Z firmware: nvidia/525.105.17/gsp_ad10x.bin 2023-05-06T10:24:16.9092363Z alias: char-major-195-* 2023-05-06T10:24:16.9092601Z version: 525.105.17 2023-05-06T10:24:16.9092814Z supported: external 2023-05-06T10:24:16.9093035Z license: NVIDIA 2023-05-06T10:24:16.9093277Z srcversion: E64DFEE541C869DE69061E6 2023-05-06T10:24:16.9093762Z alias: pci:v000010DEd*sv*sd*bc06sc80i00* 2023-05-06T10:24:16.9094044Z alias: pci:v000010DEd*sv*sd*bc03sc02i00* 2023-05-06T10:24:16.9094315Z alias: pci:v000010DEd*sv*sd*bc03sc00i00* 2023-05-06T10:24:16.9094540Z depends: drm 2023-05-06T10:24:16.9094761Z retpoline: Y 2023-05-06T10:24:16.9094976Z name: nvidia 2023-05-06T10:24:16.9095543Z vermagic: 5.15.0-1031-gcp SMP mod_unload modversions 2023-05-06T10:24:16.9095912Z parm: NvSwitchRegDwords:NvSwitch regkey (charp) 2023-05-06T10:24:16.9096409Z parm: NvSwitchBlacklist:NvSwitchBlacklist=uuid[,uuid...] (charp) 2023-05-06T10:24:16.9096731Z parm: NVreg_ResmanDebugLevel:int 2023-05-06T10:24:16.9097016Z parm: NVreg_RmLogonRC:int 2023-05-06T10:24:16.9097281Z parm: NVreg_ModifyDeviceFiles:int 2023-05-06T10:24:16.9097548Z parm: NVreg_DeviceFileUID:int 2023-05-06T10:24:16.9097794Z parm: NVreg_DeviceFileGID:int 2023-05-06T10:24:16.9098069Z parm: NVreg_DeviceFileMode:int 2023-05-06T10:24:16.9098393Z parm: NVreg_InitializeSystemMemoryAllocations:int 2023-05-06T10:24:16.9098717Z parm: NVreg_UsePageAttributeTable:int 2023-05-06T10:24:16.9099002Z parm: NVreg_EnablePCIeGen3:int 2023-05-06T10:24:16.9099257Z parm: NVreg_EnableMSI:int 2023-05-06T10:24:16.9099498Z parm: NVreg_TCEBypassMode:int 2023-05-06T10:24:16.9099773Z parm: NVreg_EnableStreamMemOPs:int 2023-05-06T10:24:16.9100090Z parm: NVreg_RestrictProfilingToAdminUsers:int 2023-05-06T10:24:16.9100672Z parm: NVreg_PreserveVideoMemoryAllocations:int 2023-05-06T10:24:16.9100997Z parm: NVreg_EnableS0ixPowerManagement:int 2023-05-06T10:24:16.9101363Z parm: NVreg_S0ixPowerManagementVideoMemoryThreshold:int 2023-05-06T10:24:16.9101718Z parm: NVreg_DynamicPowerManagement:int 2023-05-06T10:24:16.9102082Z parm: NVreg_DynamicPowerManagementVideoMemoryThreshold:int 2023-05-06T10:24:16.9102448Z parm: NVreg_EnableGpuFirmware:int 2023-05-06T10:24:16.9102742Z parm: NVreg_EnableGpuFirmwareLogs:int 2023-05-06T10:24:16.9103049Z parm: NVreg_OpenRmEnableUnsupportedGpus:int 2023-05-06T10:24:16.9103377Z parm: NVreg_EnableUserNUMAManagement:int 2023-05-06T10:24:16.9103668Z parm: NVreg_MemoryPoolSize:int 2023-05-06T10:24:16.9103938Z parm: NVreg_KMallocHeapMaxSize:int 2023-05-06T10:24:16.9104225Z parm: NVreg_VMallocHeapMaxSize:int 2023-05-06T10:24:16.9104504Z parm: NVreg_IgnoreMMIOCheck:int 2023-05-06T10:24:16.9104775Z parm: NVreg_NvLinkDisable:int 2023-05-06T10:24:16.9105069Z parm: 
NVreg_EnablePCIERelaxedOrderingMode:int 2023-05-06T10:24:16.9105459Z parm: NVreg_RegisterPCIDriver:int 2023-05-06T10:24:16.9105752Z parm: NVreg_EnableDbgBreakpoint:int 2023-05-06T10:24:16.9106023Z parm: NVreg_RegistryDwords:charp 2023-05-06T10:24:16.9106331Z parm: NVreg_RegistryDwordsPerDevice:charp 2023-05-06T10:24:16.9106610Z parm: NVreg_RmMsg:charp 2023-05-06T10:24:16.9106852Z parm: NVreg_GpuBlacklist:charp 2023-05-06T10:24:16.9107132Z parm: NVreg_TemporaryFilePath:charp 2023-05-06T10:24:16.9107407Z parm: NVreg_ExcludedGpus:charp 2023-05-06T10:24:16.9107667Z parm: NVreg_DmaRemapPeerMmio:int 2023-05-06T10:24:16.9107934Z parm: rm_firmware_active:charp 2023-05-06T10:24:16.9108161Z + set +e 2023-05-06T10:24:16.9108421Z + nvidia-smi 2023-05-06T10:24:16.9269876Z Sat May 6 10:24:16 2023 2023-05-06T10:24:16.9270862Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:16.9271392Z | NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 | 2023-05-06T10:24:16.9271839Z |-------------------------------+----------------------+----------------------+ 2023-05-06T10:24:16.9272645Z | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | 2023-05-06T10:24:16.9273138Z | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | 2023-05-06T10:24:16.9273446Z | | | MIG M. | 2023-05-06T10:24:16.9273706Z |===============================+======================+======================| 2023-05-06T10:24:16.9369997Z | 0 NVIDIA A100-SXM... On | 00000000:00:04.0 Off | 0 | 2023-05-06T10:24:16.9370332Z | N/A 30C P0 43W / 400W | 0MiB / 40960MiB | 0% Default | 2023-05-06T10:24:16.9370635Z | | | Disabled | 2023-05-06T10:24:16.9371286Z +-------------------------------+----------------------+----------------------+ 2023-05-06T10:24:16.9371839Z 2023-05-06T10:24:16.9372470Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:16.9372814Z | Processes: | 2023-05-06T10:24:16.9373129Z | GPU GI CI PID Type Process name GPU Memory | 2023-05-06T10:24:16.9373429Z | ID ID Usage | 2023-05-06T10:24:16.9373681Z |=============================================================================| 2023-05-06T10:24:16.9374591Z | No running processes found | 2023-05-06T10:24:16.9375100Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:17.1984188Z + nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 2023-05-06T10:24:17.2144448Z NVIDIA A100-SXM4-40GB 2023-05-06T10:24:17.2181619Z + NVIDIA_SMI_STATUS=0 2023-05-06T10:24:17.2182171Z + '[' 0 -eq 0 ']' 2023-05-06T10:24:17.2182515Z + echo 'INFO: Ignoring allowed status 0' 2023-05-06T10:24:17.2182778Z + set -e 2023-05-06T10:24:17.2182989Z INFO: Ignoring allowed status 0 2023-05-06T10:24:17.2190586Z == Installing nvidia container toolkit for ubuntu20.04 == 2023-05-06T10:24:17.2194966Z + sudo apt-get install -y nvidia-docker2 2023-05-06T10:24:17.2891068Z Reading package lists... 2023-05-06T10:24:17.5235244Z Building dependency tree... 2023-05-06T10:24:17.5241828Z Reading state information... 2023-05-06T10:24:17.7162722Z nvidia-docker2 is already the newest version (2.13.0-1). 
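The pre-job script shown earlier boils down to one rule: query a field such as gpu_name (which reads ERR! once the driver has crashed), treat nvidia-smi exit codes 0 and 14 as healthy, and fall back to a PCI-level reset of every NVIDIA device otherwise. The snippet below is an illustrative Python condensation of that same check, not part of the logged script; it matches NVIDIA devices by the PCI vendor ID 0x10de instead of parsing lspci, and it assumes it runs as root when performing the reset.

# Illustrative sketch (not part of the logged pre-job script): the GPU health check
# performed above, condensed into Python. Assumes nvidia-smi is on PATH and that exit
# codes 0 and 14 are the only "healthy" statuses, as in the shell version.
import glob
import subprocess
import sys

ALLOWED_STATUSES = {0, 14}  # see https://github.com/NVIDIA/gpu-operator/issues/285

def gpu_is_healthy() -> bool:
    # Query a field (gpu_name) that is missing when the driver has crashed, so the
    # command fails instead of returning 0 with ERR! placeholders.
    proc = subprocess.run(
        ["nvidia-smi", "--query-gpu=gpu_name", "--format=csv,noheader", "--id=0"],
        capture_output=True,
        text=True,
    )
    return proc.returncode in ALLOWED_STATUSES

def reset_nvidia_devices() -> None:
    # Mirror the script's PCI-level reset: write "1" to each NVIDIA device's reset file.
    # Requires root, exactly like the `sudo tee` in the shell version.
    for reset_path in glob.glob("/sys/bus/pci/devices/*/reset"):
        device_dir = reset_path.rsplit("/", 1)[0]
        with open(device_dir + "/vendor") as f:
            vendor = f.read().strip()
        if vendor.lower() == "0x10de":  # NVIDIA's PCI vendor ID
            with open(reset_path, "w") as f:
                f.write("1")

if __name__ == "__main__":
    if not gpu_is_healthy():
        reset_nvidia_devices()
        sys.exit(1)

In this run the query succeeded with status 0, so the check was a no-op and the job moved on to the container toolkit install.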
2023-05-06T10:24:17.7163163Z The following packages were automatically installed and are no longer required: 2023-05-06T10:24:17.7163689Z libatasmart4 libblockdev-fs2 libblockdev-loop2 libblockdev-part-err2 2023-05-06T10:24:17.7164232Z libblockdev-part2 libblockdev-swap2 libblockdev-utils2 libblockdev2 2023-05-06T10:24:17.7164740Z libexpat1-dev libmm-glib0 libnspr4 libnss3 libnuma1 libparted-fs-resize0 2023-05-06T10:24:17.7170740Z libpython3-dev libpython3.8-dev libudisks2-0 usb-modeswitch 2023-05-06T10:24:17.7171143Z usb-modeswitch-data 2023-05-06T10:24:17.7171497Z Use 'sudo apt autoremove' to remove them. 2023-05-06T10:24:17.8347078Z 0 upgraded, 0 newly installed, 0 to remove and 50 not upgraded. 2023-05-06T10:24:17.9164957Z + sudo systemctl restart docker 2023-05-06T10:24:28.8560772Z Command completed after 1 attempt(s). 2023-05-06T10:24:28.8642241Z ##[group]Run sudo nvidia-smi -pm 1 2023-05-06T10:24:28.8642531Z sudo nvidia-smi -pm 1 2023-05-06T10:24:28.8642792Z sudo nvidia-smi -ac 1215,1410 2023-05-06T10:24:28.8643032Z nvidia-smi 2023-05-06T10:24:28.8662091Z shell: /usr/bin/bash -e {0} 2023-05-06T10:24:28.8662311Z env: 2023-05-06T10:24:28.8662546Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:28.8662792Z GPU_FLAG: --gpus all 2023-05-06T10:24:28.8662998Z ##[endgroup] 2023-05-06T10:24:28.8984500Z Persistence mode is already Enabled for GPU 00000000:00:04.0. 2023-05-06T10:24:28.8984844Z All done. 2023-05-06T10:24:28.9365339Z Applications clocks set to "(MEM 1215, SM 1410)" for GPU 00000000:00:04.0 2023-05-06T10:24:28.9365860Z All done. 2023-05-06T10:24:28.9553700Z Sat May 6 10:24:28 2023 2023-05-06T10:24:28.9554789Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:28.9555676Z | NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 | 2023-05-06T10:24:28.9556145Z |-------------------------------+----------------------+----------------------+ 2023-05-06T10:24:28.9556838Z | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | 2023-05-06T10:24:28.9557507Z | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | 2023-05-06T10:24:28.9557836Z | | | MIG M. | 2023-05-06T10:24:28.9558100Z |===============================+======================+======================| 2023-05-06T10:24:28.9653227Z | 0 NVIDIA A100-SXM... 
On | 00000000:00:04.0 Off | 0 | 2023-05-06T10:24:28.9653711Z | N/A 30C P0 43W / 400W | 0MiB / 40960MiB | 0% Default | 2023-05-06T10:24:28.9653987Z | | | Disabled | 2023-05-06T10:24:28.9654514Z +-------------------------------+----------------------+----------------------+ 2023-05-06T10:24:28.9655262Z 2023-05-06T10:24:28.9655756Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:28.9656075Z | Processes: | 2023-05-06T10:24:28.9656379Z | GPU GI CI PID Type Process name GPU Memory | 2023-05-06T10:24:28.9657163Z | ID ID Usage | 2023-05-06T10:24:28.9657416Z |=============================================================================| 2023-05-06T10:24:28.9657708Z | No running processes found | 2023-05-06T10:24:28.9658159Z +-----------------------------------------------------------------------------+ 2023-05-06T10:24:29.2096574Z ##[group]Run python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84 2023-05-06T10:24:29.2097034Z python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84 2023-05-06T10:24:29.2097384Z python3 -m tools.stats.monitor > usage_log.txt 2>&1 & 2023-05-06T10:24:29.2097715Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2023-05-06T10:24:29.2116399Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:29.2116843Z env: 2023-05-06T10:24:29.2117079Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:29.2117301Z GPU_FLAG: --gpus all 2023-05-06T10:24:29.2117522Z ##[endgroup] 2023-05-06T10:24:30.3625942Z Requirement already satisfied: psutil==5.9.1 in /home/ubuntu/.local/lib/python3.8/site-packages (5.9.1) 2023-05-06T10:24:30.3711843Z Requirement already satisfied: nvidia-ml-py==11.525.84 in /home/ubuntu/.local/lib/python3.8/site-packages (11.525.84) 2023-05-06T10:24:30.5858324Z Prepare all required actions 2023-05-06T10:24:30.5858873Z Getting action download info 2023-05-06T10:24:30.8362353Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:4a8bfae15cc25cc0785c1603ee87a9da8fd442ea) 2023-05-06T10:24:31.3061248Z Download action repository 'actions/download-artifact@v3' (SHA:9bc31d5ccc31df68ecc42ccf4149144866c47d8a) 2023-05-06T10:24:31.7287657Z ##[group]Run ./.github/actions/download-build-artifacts 2023-05-06T10:24:31.7287933Z with: 2023-05-06T10:24:31.7288186Z name: linux-bionic-cuda11.8-py3.10-gcc7-sm80 2023-05-06T10:24:31.7288463Z env: 2023-05-06T10:24:31.7288671Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:31.7288887Z GPU_FLAG: --gpus all 2023-05-06T10:24:31.7289139Z ##[endgroup] 2023-05-06T10:24:31.7318935Z ##[group]Run seemethere/download-artifact-s3@v4 2023-05-06T10:24:31.7319207Z with: 2023-05-06T10:24:31.7319474Z name: linux-bionic-cuda11.8-py3.10-gcc7-sm80 2023-05-06T10:24:31.7319742Z s3-bucket: gha-artifacts 2023-05-06T10:24:31.7320034Z region: us-east-1 2023-05-06T10:24:31.7320224Z env: 2023-05-06T10:24:31.7320433Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:31.7320668Z GPU_FLAG: --gpus all 2023-05-06T10:24:31.7320869Z ##[endgroup] 2023-05-06T10:24:32.3189189Z Found 1 objects with prefix pytorch/pytorch/4900301301/linux-bionic-cuda11.8-py3.10-gcc7-sm80/ 2023-05-06T10:24:32.3189841Z Starting download (1/1): /home/weiwangmeta/actions-runner/_work/pytorch/pytorch/artifacts.zip 2023-05-06T10:24:40.6442829Z Finished download (1/1): /home/weiwangmeta/actions-runner/_work/pytorch/pytorch/artifacts.zip 2023-05-06T10:24:40.6443149Z 2023-05-06T10:24:40.6467001Z ##[warning]The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. 
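The monitoring step above installs psutil 5.9.1 and nvidia-ml-py 11.525.84, then launches python3 -m tools.stats.monitor in the background with its output redirected to usage_log.txt. Below is a minimal sketch of the kind of sampling loop such a monitor can run, built on those two libraries; it is an illustration only, not the actual tools.stats.monitor implementation, and the JSON output format and one-second interval are assumptions.

# Minimal sketch of a CPU/GPU utilization monitor using psutil + nvidia-ml-py (pynvml).
# Not the actual tools.stats.monitor code; output format and interval are assumptions.
import json
import time

import psutil
import pynvml  # provided by the nvidia-ml-py package

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

try:
    while True:
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        sample = {
            "time": time.time(),
            "cpu_percent": psutil.cpu_percent(),
            "ram_percent": psutil.virtual_memory().percent,
            "gpu_percent": util.gpu,
            "gpu_mem_used_mb": mem.used // (1024 * 1024),
        }
        # The workflow redirects stdout to usage_log.txt, one sample per line.
        print(json.dumps(sample), flush=True)
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()

The step records the background process ID as the monitor-script-pid output so a later step can stop the monitor and upload usage_log.txt.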
For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/ 2023-05-06T10:24:40.6477780Z Artifact download has finished successfully 2023-05-06T10:24:40.6629938Z ##[group]Run unzip -o artifacts.zip 2023-05-06T10:24:40.6630255Z unzip -o artifacts.zip 2023-05-06T10:24:40.6649393Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:40.6682185Z env: 2023-05-06T10:24:40.6682437Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:40.6682665Z GPU_FLAG: --gpus all 2023-05-06T10:24:40.6682884Z ##[endgroup] 2023-05-06T10:24:40.6738764Z Archive: artifacts.zip 2023-05-06T10:24:40.6741313Z creating: dist/ 2023-05-06T10:24:42.6806807Z inflating: dist/torch-2.1.0a0+gitd719f02-cp310-cp310-linux_x86_64.whl 2023-05-06T10:24:42.6807251Z creating: build/custom_test_artifacts/ 2023-05-06T10:24:42.6808141Z creating: build/custom_test_artifacts/custom-op-build/ 2023-05-06T10:24:42.6808583Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2023-05-06T10:24:42.6812082Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeOutput.log 2023-05-06T10:24:42.6812719Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/ 2023-05-06T10:24:42.6813260Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-05-06T10:24:42.6813783Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-05-06T10:24:42.6814353Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-05-06T10:24:42.6815261Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-05-06T10:24:42.6816536Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-05-06T10:24:42.6817134Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-05-06T10:24:42.6817675Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-05-06T10:24:42.6819334Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-05-06T10:24:42.6820963Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-05-06T10:24:42.6822089Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-05-06T10:24:42.6822922Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-05-06T10:24:42.6824031Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-05-06T10:24:42.6825134Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-05-06T10:24:42.6825683Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-05-06T10:24:42.6826222Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-05-06T10:24:42.6879729Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-05-06T10:24:42.6881067Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-05-06T10:24:42.6882352Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 
2023-05-06T10:24:42.6883440Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-05-06T10:24:42.6884160Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-05-06T10:24:42.6884840Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-05-06T10:24:42.6885505Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-05-06T10:24:42.6886161Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-05-06T10:24:42.6886839Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-05-06T10:24:42.6930031Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-05-06T10:24:42.6976967Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-05-06T10:24:42.6978177Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-05-06T10:24:42.6979586Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-05-06T10:24:42.6980622Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-05-06T10:24:42.6981558Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-05-06T10:24:42.6982333Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-05-06T10:24:42.6982920Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-05-06T10:24:42.6983536Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-05-06T10:24:42.7053626Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-05-06T10:24:42.7125931Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-05-06T10:24:42.7126914Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-05-06T10:24:42.7127887Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2023-05-06T10:24:42.7129157Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeError.log 2023-05-06T10:24:42.7130070Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2023-05-06T10:24:42.7130902Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2023-05-06T10:24:42.7131803Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2023-05-06T10:24:42.7132900Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2023-05-06T10:24:42.7133827Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2023-05-06T10:24:42.7134385Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2023-05-06T10:24:42.7134948Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2023-05-06T10:24:42.7135498Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2023-05-06T10:24:42.7136070Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2023-05-06T10:24:42.7136631Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2023-05-06T10:24:42.7137183Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2023-05-06T10:24:42.7157264Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2023-05-06T10:24:42.7273039Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2023-05-06T10:24:42.7274031Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2023-05-06T10:24:42.7274981Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2023-05-06T10:24:42.7275892Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2023-05-06T10:24:42.7277112Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2023-05-06T10:24:42.7277880Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2023-05-06T10:24:42.7278460Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2023-05-06T10:24:42.7279021Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2023-05-06T10:24:42.7279872Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2023-05-06T10:24:42.7280449Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2023-05-06T10:24:42.7281085Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2023-05-06T10:24:42.7300599Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2023-05-06T10:24:42.7387210Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2023-05-06T10:24:42.7388192Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-05-06T10:24:42.7389117Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2023-05-06T10:24:42.7390035Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2023-05-06T10:24:42.7390631Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2023-05-06T10:24:42.7391136Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2023-05-06T10:24:42.7391640Z inflating: build/custom_test_artifacts/custom-op-build/detect_cuda_version.cc 2023-05-06T10:24:42.7393207Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2023-05-06T10:24:42.7394486Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2023-05-06T10:24:42.7395028Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2023-05-06T10:24:42.7488644Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2023-05-06T10:24:42.7552784Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 
2023-05-06T10:24:42.7553248Z creating: build/custom_test_artifacts/jit-hook-build/ 2023-05-06T10:24:42.7553681Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2023-05-06T10:24:42.7560584Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeOutput.log 2023-05-06T10:24:42.7561383Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/ 2023-05-06T10:24:42.7561894Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-05-06T10:24:42.7562433Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-05-06T10:24:42.7563014Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-05-06T10:24:42.7563948Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-05-06T10:24:42.7565055Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-05-06T10:24:42.7565843Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-05-06T10:24:42.7566402Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-05-06T10:24:42.7568003Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-05-06T10:24:42.7569072Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-05-06T10:24:42.7570356Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-05-06T10:24:42.7571241Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-05-06T10:24:42.7572332Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-05-06T10:24:42.7573442Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-05-06T10:24:42.7574034Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-05-06T10:24:42.7574868Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-05-06T10:24:42.7628095Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-05-06T10:24:42.7629375Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-05-06T10:24:42.7630704Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2023-05-06T10:24:42.7631758Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-05-06T10:24:42.7632460Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-05-06T10:24:42.7633115Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-05-06T10:24:42.7633773Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-05-06T10:24:42.7634424Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-05-06T10:24:42.7635311Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-05-06T10:24:42.7678303Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-05-06T10:24:42.7725123Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-05-06T10:24:42.7726318Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-05-06T10:24:42.7727385Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-05-06T10:24:42.7728539Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-05-06T10:24:42.7729230Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-05-06T10:24:42.7730245Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-05-06T10:24:42.7730951Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-05-06T10:24:42.7731563Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-05-06T10:24:42.7801884Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-05-06T10:24:42.7874988Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-05-06T10:24:42.7876145Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-05-06T10:24:42.7877901Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2023-05-06T10:24:42.7878816Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeError.log 2023-05-06T10:24:42.7879752Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2023-05-06T10:24:42.7880667Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2023-05-06T10:24:42.7882514Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2023-05-06T10:24:42.7883136Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2023-05-06T10:24:42.7883710Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2023-05-06T10:24:42.7884268Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2023-05-06T10:24:42.7885282Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2023-05-06T10:24:42.7885855Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2023-05-06T10:24:42.7886444Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2023-05-06T10:24:42.7887011Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2023-05-06T10:24:42.7887559Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2023-05-06T10:24:42.7905924Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2023-05-06T10:24:42.7978669Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2023-05-06T10:24:42.7979554Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-05-06T10:24:42.7980789Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2023-05-06T10:24:42.7981611Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2023-05-06T10:24:42.7982417Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2023-05-06T10:24:42.7982932Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2023-05-06T10:24:42.7983631Z inflating: build/custom_test_artifacts/jit-hook-build/detect_cuda_version.cc 2023-05-06T10:24:42.7984427Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2023-05-06T10:24:42.7985191Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2023-05-06T10:24:42.7985694Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2023-05-06T10:24:42.8041431Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2023-05-06T10:24:42.8042080Z creating: build/custom_test_artifacts/custom-backend-build/ 2023-05-06T10:24:42.8042556Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2023-05-06T10:24:42.8048441Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeOutput.log 2023-05-06T10:24:42.8049336Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/ 2023-05-06T10:24:42.8049897Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeSystem.cmake 2023-05-06T10:24:42.8050513Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/ 2023-05-06T10:24:42.8051178Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/tmp/ 2023-05-06T10:24:42.8051954Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/CMakeCCompilerId.c 2023-05-06T10:24:42.8053065Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdC/a.out 2023-05-06T10:24:42.8053786Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/ 2023-05-06T10:24:42.8054331Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/tmp/ 2023-05-06T10:24:42.8056359Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/CMakeCXXCompilerId.cpp 2023-05-06T10:24:42.8057309Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCXX/a.out 2023-05-06T10:24:42.8058528Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_C.bin 2023-05-06T10:24:42.8059403Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCCompiler.cmake 2023-05-06T10:24:42.8060660Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CXX.bin 2023-05-06T10:24:42.8062042Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCXXCompiler.cmake 2023-05-06T10:24:42.8062623Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/ 2023-05-06T10:24:42.8063205Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/ 2023-05-06T10:24:42.8117386Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2023-05-06T10:24:42.8118750Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2023-05-06T10:24:42.8119956Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2023-05-06T10:24:42.8120958Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2023-05-06T10:24:42.8121693Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2023-05-06T10:24:42.8122393Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2023-05-06T10:24:42.8123349Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2023-05-06T10:24:42.8124050Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2023-05-06T10:24:42.8124724Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2023-05-06T10:24:42.8167820Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2023-05-06T10:24:42.8214721Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2023-05-06T10:24:42.8216050Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2023-05-06T10:24:42.8217323Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2023-05-06T10:24:42.8218299Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.reg.c 2023-05-06T10:24:42.8219209Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin 2023-05-06T10:24:42.8220078Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2023-05-06T10:24:42.8220770Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/tmp/a_dlink.o 2023-05-06T10:24:42.8221405Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/CMakeCUDACompilerId.cu 2023-05-06T10:24:42.8291691Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CompilerIdCUDA/a.out 2023-05-06T10:24:42.8364009Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeDetermineCompilerABI_CUDA.bin 2023-05-06T10:24:42.8365179Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.22.1/CMakeCUDACompiler.cmake 2023-05-06T10:24:42.8366180Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2023-05-06T10:24:42.8367166Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeError.log 2023-05-06T10:24:42.8368011Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2023-05-06T10:24:42.8368960Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 
2023-05-06T10:24:42.8370282Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2023-05-06T10:24:42.8370986Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2023-05-06T10:24:42.8371606Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2023-05-06T10:24:42.8372185Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2023-05-06T10:24:42.8372786Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2023-05-06T10:24:42.8373394Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2023-05-06T10:24:42.8374151Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2023-05-06T10:24:42.8374960Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2023-05-06T10:24:42.8375572Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2023-05-06T10:24:42.8376204Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2023-05-06T10:24:42.8524469Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2023-05-06T10:24:42.8525593Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2023-05-06T10:24:42.8526428Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2023-05-06T10:24:42.8527607Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2023-05-06T10:24:42.8528355Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2023-05-06T10:24:42.8528952Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2023-05-06T10:24:42.8529578Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2023-05-06T10:24:42.8530194Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2023-05-06T10:24:42.8530868Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2023-05-06T10:24:42.8531463Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2023-05-06T10:24:42.8532070Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2023-05-06T10:24:42.8550869Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2023-05-06T10:24:42.8613470Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2023-05-06T10:24:42.8614412Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2023-05-06T10:24:42.8615284Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2023-05-06T10:24:42.8616094Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 
2023-05-06T10:24:42.8616644Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2023-05-06T10:24:42.8617154Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2023-05-06T10:24:42.8617680Z inflating: build/custom_test_artifacts/custom-backend-build/detect_cuda_version.cc 2023-05-06T10:24:42.8618762Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2023-05-06T10:24:42.8619981Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2023-05-06T10:24:42.8620546Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2023-05-06T10:24:42.8738567Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2023-05-06T10:24:42.8786073Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2023-05-06T10:24:42.8786584Z creating: build/lib/ 2023-05-06T10:24:42.8786824Z inflating: build/lib/libclog.a 2023-05-06T10:24:42.8796398Z inflating: build/lib/libpthreadpool.a 2023-05-06T10:24:42.8863590Z inflating: build/lib/libgtest.a 2023-05-06T10:24:42.8967020Z inflating: build/lib/libprotobuf-lite.a 2023-05-06T10:24:42.8975308Z inflating: build/lib/libittnotify.a 2023-05-06T10:24:42.9070132Z inflating: build/lib/libbenchmark.a 2023-05-06T10:24:42.9586845Z inflating: build/lib/libprotobuf.a 2023-05-06T10:24:42.9618895Z inflating: build/lib/libtensorpipe_uv.a 2023-05-06T10:24:42.9694401Z inflating: build/lib/libasmjit.a 2023-05-06T10:24:42.9831972Z inflating: build/lib/libgloo.a 2023-05-06T10:24:42.9853345Z inflating: build/lib/libfmt.a 2023-05-06T10:24:42.9853793Z inflating: build/lib/libfoxi_loader.a 2023-05-06T10:24:42.9855538Z inflating: build/lib/libcaffe2_nvrtc.so 2023-05-06T10:24:42.9942979Z inflating: build/lib/libc10.so 2023-05-06T10:24:42.9943848Z inflating: build/lib/libtorch_global_deps.so 2023-05-06T10:24:42.9953140Z inflating: build/lib/libcpuinfo.a 2023-05-06T10:24:42.9961848Z inflating: build/lib/libcpuinfo_internals.a 2023-05-06T10:24:42.9976833Z inflating: build/lib/libqnnpack.a 2023-05-06T10:24:43.0528534Z inflating: build/lib/libprotoc.a 2023-05-06T10:24:43.0549830Z inflating: build/lib/libpytorch_qnnpack.a 2023-05-06T10:24:43.0551946Z inflating: build/lib/libnnpack_reference_layers.a 2023-05-06T10:24:43.0573484Z inflating: build/lib/libnnpack.a 2023-05-06T10:24:43.0574241Z inflating: build/lib/libgtest_main.a 2023-05-06T10:24:43.0591564Z inflating: build/lib/libgmock.a 2023-05-06T10:24:43.0592025Z inflating: build/lib/libbenchmark_main.a 2023-05-06T10:24:43.1226742Z inflating: build/lib/libtensorpipe.a 2023-05-06T10:24:44.0772020Z inflating: build/lib/libdnnl.a 2023-05-06T10:24:44.0832507Z inflating: build/lib/libc10_cuda.so 2023-05-06T10:24:44.0832880Z inflating: build/lib/libgmock_main.a 2023-05-06T10:24:44.2349476Z inflating: build/lib/libfbgemm.a 2023-05-06T10:24:44.2629455Z inflating: build/lib/libtensorpipe_cuda.a 2023-05-06T10:24:44.3737694Z inflating: build/lib/libdnnl_graph.a 2023-05-06T10:24:44.4257099Z inflating: build/lib/libkineto.a 2023-05-06T10:24:44.4300722Z inflating: build/lib/libcaffe2_protos.a 2023-05-06T10:24:44.4439732Z inflating: build/lib/libXNNPACK.a 2023-05-06T10:24:44.4486321Z inflating: build/lib/libonnx_proto.a 2023-05-06T10:24:44.5201664Z inflating: build/lib/libonnx.a 2023-05-06T10:24:44.5621588Z inflating: build/lib/libgloo_cuda.a 2023-05-06T10:24:46.9185057Z inflating: build/lib/libtorch_cpu.so 2023-05-06T10:24:46.9200278Z inflating: build/lib/libunbox_lib.a 
2023-05-06T10:24:48.7053572Z inflating: build/lib/libtorch_cuda.so 2023-05-06T10:24:48.7053958Z inflating: build/lib/libtorch.so 2023-05-06T10:24:48.7055860Z inflating: build/lib/libc10d_cuda_test.so 2023-05-06T10:24:49.8125675Z inflating: build/lib/libtorch_cuda_linalg.so 2023-05-06T10:24:49.8147968Z inflating: build/lib/libjitbackend_test.so 2023-05-06T10:24:49.8207229Z inflating: build/lib/libtorchbind_test.so 2023-05-06T10:24:49.8237329Z inflating: build/lib/libbackend_with_compiler.so 2023-05-06T10:24:49.8241207Z inflating: build/lib/libshm.so 2023-05-06T10:24:49.8944087Z inflating: build/lib/libnvfuser_codegen.so 2023-05-06T10:24:50.0700338Z inflating: build/lib/libtorch_python.so 2023-05-06T10:24:50.0737872Z inflating: build/lib/libnnapi_backend.so 2023-05-06T10:24:50.0738664Z creating: build/bin/ 2023-05-06T10:24:50.0790652Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2023-05-06T10:24:50.0845761Z inflating: build/bin/c10_DeviceGuard_test 2023-05-06T10:24:50.0899620Z inflating: build/bin/c10_Device_test 2023-05-06T10:24:50.0950695Z inflating: build/bin/c10_StreamGuard_test 2023-05-06T10:24:50.1013304Z inflating: build/bin/c10_DispatchKeySet_test 2023-05-06T10:24:50.1065515Z inflating: build/bin/c10_SymInt_test 2023-05-06T10:24:50.1125007Z inflating: build/bin/c10_InlineDeviceGuard_test 2023-05-06T10:24:50.1184507Z inflating: build/bin/c10_InlineStreamGuard_test 2023-05-06T10:24:50.1244933Z inflating: build/bin/c10_SizesAndStrides_test 2023-05-06T10:24:50.1295566Z inflating: build/bin/c10_Array_test 2023-05-06T10:24:50.1352411Z inflating: build/bin/c10_Bitset_test 2023-05-06T10:24:50.1406713Z inflating: build/bin/c10_C++17_test 2023-05-06T10:24:50.1457691Z inflating: build/bin/c10_ConstexprCrc_test 2023-05-06T10:24:50.1509703Z inflating: build/bin/c10_DeadlockDetection_test 2023-05-06T10:24:50.1562670Z inflating: build/bin/c10_Half_test 2023-05-06T10:24:50.1622973Z inflating: build/bin/c10_LeftRight_test 2023-05-06T10:24:50.1684220Z inflating: build/bin/c10_Metaprogramming_test 2023-05-06T10:24:50.1737376Z inflating: build/bin/c10_Synchronized_test 2023-05-06T10:24:50.1798035Z inflating: build/bin/c10_ThreadLocal_test 2023-05-06T10:24:50.1854350Z inflating: build/bin/c10_TypeIndex_test 2023-05-06T10:24:50.1908035Z inflating: build/bin/c10_TypeList_test 2023-05-06T10:24:50.1959125Z inflating: build/bin/c10_TypeTraits_test 2023-05-06T10:24:50.2014523Z inflating: build/bin/c10_accumulate_test 2023-05-06T10:24:50.2074247Z inflating: build/bin/c10_bfloat16_test 2023-05-06T10:24:50.2127354Z inflating: build/bin/c10_bit_cast_test 2023-05-06T10:24:50.2187294Z inflating: build/bin/c10_complex_math_test 2023-05-06T10:24:50.2245372Z inflating: build/bin/c10_complex_test 2023-05-06T10:24:50.2362814Z inflating: build/bin/c10_either_test 2023-05-06T10:24:50.2419110Z inflating: build/bin/c10_exception_test 2023-05-06T10:24:50.2471538Z inflating: build/bin/c10_flags_test 2023-05-06T10:24:50.2524974Z inflating: build/bin/c10_irange_test 2023-05-06T10:24:50.2703415Z inflating: build/bin/c10_intrusive_ptr_test 2023-05-06T10:24:50.2763830Z inflating: build/bin/c10_logging_test 2023-05-06T10:24:50.2843396Z inflating: build/bin/c10_optional_test 2023-05-06T10:24:50.2909421Z inflating: build/bin/c10_ordered_preserving_dict_test 2023-05-06T10:24:50.2968210Z inflating: build/bin/c10_registry_test 2023-05-06T10:24:50.3123750Z inflating: build/bin/c10_small_vector_test 2023-05-06T10:24:50.3178402Z inflating: build/bin/c10_ssize_test 2023-05-06T10:24:50.3326982Z inflating: 
build/bin/c10_string_view_test 2023-05-06T10:24:50.3383650Z inflating: build/bin/c10_tempfile_test 2023-05-06T10:24:50.3443016Z inflating: build/bin/c10_typeid_test 2023-05-06T10:24:50.3501066Z inflating: build/bin/c10_intrusive_ptr_benchmark 2023-05-06T10:24:50.4006474Z inflating: build/bin/protoc-3.13.0.0 2023-05-06T10:24:50.4509168Z inflating: build/bin/protoc 2023-05-06T10:24:50.4566096Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_1_var_test 2023-05-06T10:24:50.4623703Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_stream 2023-05-06T10:24:50.4680203Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device 2023-05-06T10:24:50.4735646Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_from_2_processes 2023-05-06T10:24:50.4792480Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads 2023-05-06T10:24:50.4844044Z inflating: build/bin/c10_cuda_CUDATest 2023-05-06T10:24:50.4900619Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block 2023-05-06T10:24:50.4957220Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks 2023-05-06T10:24:50.5278481Z inflating: build/bin/vec_test_all_types_DEFAULT 2023-05-06T10:24:50.5631756Z inflating: build/bin/vec_test_all_types_AVX2 2023-05-06T10:24:50.5694648Z inflating: build/bin/TCPStoreTest 2023-05-06T10:24:50.5751866Z inflating: build/bin/HashStoreTest 2023-05-06T10:24:50.5809208Z inflating: build/bin/FileStoreTest 2023-05-06T10:24:50.5824502Z inflating: build/bin/ProcessGroupMPITest 2023-05-06T10:24:50.5889764Z inflating: build/bin/test_edge_op_registration 2023-05-06T10:24:50.5892711Z inflating: build/bin/example_allreduce 2023-05-06T10:24:50.5949061Z inflating: build/bin/Dimname_test 2023-05-06T10:24:50.6026558Z inflating: build/bin/Dict_test 2023-05-06T10:24:50.6094425Z inflating: build/bin/MaybeOwned_test 2023-05-06T10:24:50.6155064Z inflating: build/bin/NamedTensor_test 2023-05-06T10:24:50.6218192Z inflating: build/bin/apply_utils_test 2023-05-06T10:24:50.6280761Z inflating: build/bin/atest 2023-05-06T10:24:50.6345571Z inflating: build/bin/basic 2023-05-06T10:24:50.6403262Z inflating: build/bin/broadcast_test 2023-05-06T10:24:50.6464737Z inflating: build/bin/cpu_generator_test 2023-05-06T10:24:50.6520790Z inflating: build/bin/cpu_profiling_allocator_test 2023-05-06T10:24:50.6573778Z inflating: build/bin/dispatch_key_set_test 2023-05-06T10:24:50.6667742Z inflating: build/bin/cpu_rng_test 2023-05-06T10:24:50.6720110Z inflating: build/bin/dlconvertor_test 2023-05-06T10:24:50.6782524Z inflating: build/bin/extension_backend_test 2023-05-06T10:24:50.6841218Z inflating: build/bin/half_test 2023-05-06T10:24:50.6893483Z inflating: build/bin/lazy_tensor_test 2023-05-06T10:24:50.6993611Z inflating: build/bin/ivalue_test 2023-05-06T10:24:50.7051101Z inflating: build/bin/math_kernel_test 2023-05-06T10:24:50.7108540Z inflating: build/bin/memory_format_test 2023-05-06T10:24:50.7165606Z inflating: build/bin/memory_overlapping_test 2023-05-06T10:24:50.7219727Z inflating: build/bin/operator_name_test 2023-05-06T10:24:50.7275410Z inflating: build/bin/mobile_memory_cleanup 2023-05-06T10:24:50.7336797Z inflating: build/bin/native_test 2023-05-06T10:24:50.7390226Z inflating: build/bin/operators_test 2023-05-06T10:24:50.7446778Z inflating: build/bin/packedtensoraccessor_test 2023-05-06T10:24:50.7508738Z inflating: build/bin/quantized_test 2023-05-06T10:24:50.7578136Z inflating: build/bin/pow_test 2023-05-06T10:24:50.7637937Z inflating: 
build/bin/scalar_tensor_test 2023-05-06T10:24:50.7692526Z inflating: build/bin/reportMemoryUsage_test 2023-05-06T10:24:50.7745233Z inflating: build/bin/reduce_ops_test 2023-05-06T10:24:50.7805413Z inflating: build/bin/scalar_test 2023-05-06T10:24:50.7861193Z inflating: build/bin/StorageUtils_test 2023-05-06T10:24:50.7917069Z inflating: build/bin/stride_properties_test 2023-05-06T10:24:50.7976744Z inflating: build/bin/type_ptr_test 2023-05-06T10:24:50.8059665Z inflating: build/bin/tensor_iterator_test 2023-05-06T10:24:50.8062419Z inflating: build/bin/thread_init_test 2023-05-06T10:24:50.8121923Z inflating: build/bin/test_parallel 2023-05-06T10:24:50.8174287Z inflating: build/bin/variant_test 2023-05-06T10:24:50.8238284Z inflating: build/bin/type_test 2023-05-06T10:24:50.8294866Z inflating: build/bin/undefined_tensor_test 2023-05-06T10:24:50.8295537Z inflating: build/bin/verify_api_visibility 2023-05-06T10:24:50.8369255Z inflating: build/bin/legacy_vmap_test 2023-05-06T10:24:50.8423804Z inflating: build/bin/weakref_test 2023-05-06T10:24:50.8477312Z inflating: build/bin/wrapdim_test 2023-05-06T10:24:50.8592801Z inflating: build/bin/List_test 2023-05-06T10:24:50.8656250Z inflating: build/bin/IListRef_test 2023-05-06T10:24:50.8707975Z inflating: build/bin/xla_tensor_test 2023-05-06T10:24:50.8836364Z inflating: build/bin/kernel_function_legacy_test 2023-05-06T10:24:50.8905311Z inflating: build/bin/KernelFunction_test 2023-05-06T10:24:50.9006929Z inflating: build/bin/kernel_function_test 2023-05-06T10:24:50.9144077Z inflating: build/bin/kernel_lambda_legacy_test 2023-05-06T10:24:50.9252944Z inflating: build/bin/kernel_lambda_test 2023-05-06T10:24:50.9317103Z inflating: build/bin/kernel_stackbased_test 2023-05-06T10:24:50.9371336Z inflating: build/bin/CppSignature_test 2023-05-06T10:24:50.9472382Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2023-05-06T10:24:50.9523401Z inflating: build/bin/op_allowlist_test 2023-05-06T10:24:50.9581935Z inflating: build/bin/inline_container_test 2023-05-06T10:24:50.9642645Z inflating: build/bin/backend_fallback_test 2023-05-06T10:24:50.9948619Z inflating: build/bin/op_registration_test 2023-05-06T10:24:51.0005047Z inflating: build/bin/cuda_apply_test 2023-05-06T10:24:51.0068667Z inflating: build/bin/cuda_atomic_ops_test 2023-05-06T10:24:51.0126394Z inflating: build/bin/cuda_caching_host_allocator_test 2023-05-06T10:24:51.0178529Z inflating: build/bin/cuda_device_test 2023-05-06T10:24:51.0254951Z inflating: build/bin/cuda_complex_math_test 2023-05-06T10:24:51.0317590Z inflating: build/bin/cuda_complex_test 2023-05-06T10:24:51.0380366Z inflating: build/bin/cuda_cub_test 2023-05-06T10:24:51.0433426Z inflating: build/bin/cuda_dlconvertor_test 2023-05-06T10:24:51.0487992Z inflating: build/bin/cuda_integer_divider_test 2023-05-06T10:24:51.0558126Z inflating: build/bin/cuda_distributions_test 2023-05-06T10:24:51.0620471Z inflating: build/bin/cuda_generator_test 2023-05-06T10:24:51.0672473Z inflating: build/bin/cuda_half_test 2023-05-06T10:24:51.0724629Z inflating: build/bin/cuda_optional_test 2023-05-06T10:24:51.0788635Z inflating: build/bin/cuda_stream_test 2023-05-06T10:24:51.0846484Z inflating: build/bin/cuda_reportMemoryUsage_test 2023-05-06T10:24:51.0901246Z inflating: build/bin/cuda_packedtensoraccessor_test 2023-05-06T10:24:51.0952704Z inflating: build/bin/cuda_cudnn_test 2023-05-06T10:24:51.1008732Z inflating: build/bin/cuda_vectorized_test 2023-05-06T10:24:51.1025317Z inflating: build/bin/tutorial_tensorexpr 2023-05-06T10:24:51.1095173Z inflating: 
build/bin/ProcessGroupGlooTest 2023-05-06T10:24:51.1157115Z inflating: build/bin/ProcessGroupGlooAsyncTest 2023-05-06T10:24:51.1218618Z inflating: build/bin/ProcessGroupNCCLErrorsTest 2023-05-06T10:24:51.1283484Z inflating: build/bin/ProcessGroupNCCLTest 2023-05-06T10:24:51.1340593Z inflating: build/bin/test_dist_autograd 2023-05-06T10:24:51.1414291Z inflating: build/bin/test_cpp_rpc 2023-05-06T10:24:51.1416782Z inflating: build/bin/parallel_benchmark 2023-05-06T10:24:51.1489051Z inflating: build/bin/test_mobile_nnc 2023-05-06T10:24:51.1499224Z inflating: build/bin/aot_model_compiler_test 2023-05-06T10:24:51.2381096Z inflating: build/bin/test_tensorexpr 2023-05-06T10:24:51.2756890Z inflating: build/bin/test_lazy 2023-05-06T10:24:51.2761962Z inflating: build/bin/torch_shm_manager 2023-05-06T10:24:51.4070671Z inflating: build/bin/test_api 2023-05-06T10:24:51.4679764Z inflating: build/bin/test_jit 2023-05-06T10:24:51.5297075Z inflating: build/bin/nvfuser_tests 2023-05-06T10:24:51.5333803Z ##[group]Run df -H 2023-05-06T10:24:51.5334036Z df -H 2023-05-06T10:24:51.5352947Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:51.5353219Z env: 2023-05-06T10:24:51.5353418Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:51.5353653Z GPU_FLAG: --gpus all 2023-05-06T10:24:51.5353874Z ##[endgroup] 2023-05-06T10:24:51.5408763Z Filesystem Size Used Avail Use% Mounted on 2023-05-06T10:24:51.5409059Z /dev/root 1.1T 224G 843G 21% / 2023-05-06T10:24:51.5409426Z devtmpfs 45G 0 45G 0% /dev 2023-05-06T10:24:51.5409737Z tmpfs 45G 0 45G 0% /dev/shm 2023-05-06T10:24:51.5409988Z tmpfs 9.0G 1.1M 9.0G 1% /run 2023-05-06T10:24:51.5410245Z tmpfs 5.3M 0 5.3M 0% /run/lock 2023-05-06T10:24:51.5410790Z tmpfs 45G 0 45G 0% /sys/fs/cgroup 2023-05-06T10:24:51.5411593Z /dev/loop0 16M 16M 0 100% /snap/aws-cli/130 2023-05-06T10:24:51.5411868Z /dev/loop1 123M 123M 0 100% /snap/core/14784 2023-05-06T10:24:51.5412149Z /dev/loop2 123M 123M 0 100% /snap/core/14946 2023-05-06T10:24:51.5412439Z /dev/loop3 59M 59M 0 100% /snap/core18/2714 2023-05-06T10:24:51.5412713Z /dev/loop4 59M 59M 0 100% /snap/core18/2721 2023-05-06T10:24:51.5412969Z /dev/loop5 67M 67M 0 100% /snap/core20/1852 2023-05-06T10:24:51.5413239Z /dev/loop10 97M 97M 0 100% /snap/lxd/24061 2023-05-06T10:24:51.5413511Z /dev/loop8 97M 97M 0 100% /snap/lxd/23991 2023-05-06T10:24:51.5413767Z /dev/loop12 53M 53M 0 100% /snap/snapd/18596 2023-05-06T10:24:51.5414031Z /dev/sda15 110M 5.5M 104M 5% /boot/efi 2023-05-06T10:24:51.5414297Z /dev/loop7 56M 56M 0 100% /snap/snapd/18933 2023-05-06T10:24:51.5414683Z /dev/loop11 352M 352M 0 100% /snap/google-cloud-sdk/338 2023-05-06T10:24:51.5415088Z /dev/loop9 352M 352M 0 100% /snap/google-cloud-sdk/340 2023-05-06T10:24:51.5415381Z /dev/loop13 67M 67M 0 100% /snap/core20/1879 2023-05-06T10:24:51.5438449Z ##[group]Run .github/scripts/parse_ref.py 2023-05-06T10:24:51.5438760Z .github/scripts/parse_ref.py 2023-05-06T10:24:51.5455114Z shell: /usr/bin/bash -e {0} 2023-05-06T10:24:51.5455336Z env: 2023-05-06T10:24:51.5455543Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:51.5455766Z GPU_FLAG: --gpus all 2023-05-06T10:24:51.5455986Z ##[endgroup] 2023-05-06T10:24:51.5781631Z Prepare all required actions 2023-05-06T10:24:51.5782338Z Getting action download info 2023-05-06T10:24:51.7358187Z ##[group]Run ./.github/actions/filter-test-configs 2023-05-06T10:24:51.7358456Z with: 2023-05-06T10:24:51.7359176Z github-token: *** 2023-05-06T10:24:51.7360620Z test-matrix: {"include": [{"config": "inductor_huggingface_perf", "shard": 1, "num_shards": 
3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 1, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 2, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 3, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 4, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 5, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 6, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}]} 2023-05-06T10:24:51.7362165Z env: 2023-05-06T10:24:51.7362370Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:51.7362586Z GPU_FLAG: --gpus all 2023-05-06T10:24:51.7362797Z ##[endgroup] 2023-05-06T10:24:51.7394950Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2023-05-06T10:24:51.7395294Z with: 2023-05-06T10:24:51.7395473Z shell: bash 2023-05-06T10:24:51.7395684Z timeout_minutes: 10 2023-05-06T10:24:51.7395899Z max_attempts: 5 2023-05-06T10:24:51.7396113Z retry_wait_seconds: 30 2023-05-06T10:24:51.7396416Z command: set -eux python3 -m pip install requests==2.26.0 pyyaml==6.0 2023-05-06T10:24:51.7396907Z polling_interval_seconds: 1 2023-05-06T10:24:51.7397132Z warning_on_retry: true 2023-05-06T10:24:51.7397359Z continue_on_error: false 2023-05-06T10:24:51.7397571Z env: 2023-05-06T10:24:51.7397760Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:51.7398214Z GPU_FLAG: --gpus all 2023-05-06T10:24:51.7398605Z GITHUB_TOKEN: *** 2023-05-06T10:24:51.7398817Z ##[endgroup] 2023-05-06T10:24:51.8034063Z + python3 -m pip install requests==2.26.0 pyyaml==6.0 2023-05-06T10:24:52.9188311Z Requirement already satisfied: requests==2.26.0 in /home/ubuntu/.local/lib/python3.8/site-packages (2.26.0) 2023-05-06T10:24:52.9305794Z Requirement already satisfied: pyyaml==6.0 in /home/ubuntu/.local/lib/python3.8/site-packages (6.0) 2023-05-06T10:24:52.9320713Z Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests==2.26.0) (2019.11.28) 2023-05-06T10:24:52.9332805Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests==2.26.0) (1.25.8) 2023-05-06T10:24:52.9394363Z Requirement already satisfied: idna<4,>=2.5; python_version >= "3" in /usr/lib/python3/dist-packages (from requests==2.26.0) (2.8) 2023-05-06T10:24:52.9416625Z Requirement already satisfied: charset-normalizer~=2.0.0; python_version >= "3" in /home/ubuntu/.local/lib/python3.8/site-packages (from requests==2.26.0) (2.0.12) 2023-05-06T10:24:53.8024282Z Command completed after 1 attempt(s). 
2023-05-06T10:24:53.8072913Z ##[group]Run .github/scripts/parse_ref.py 2023-05-06T10:24:53.8073211Z .github/scripts/parse_ref.py 2023-05-06T10:24:53.8091631Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:53.8091897Z env: 2023-05-06T10:24:53.8092109Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:53.8092330Z GPU_FLAG: --gpus all 2023-05-06T10:24:53.8092551Z ##[endgroup] 2023-05-06T10:24:53.8387896Z ##[group]Run set -x 2023-05-06T10:24:53.8388367Z set -x 2023-05-06T10:24:53.8388572Z  2023-05-06T10:24:53.8388865Z # TODO: This is a very hacky way to get the job name. GitHub runner has the info 2023-05-06T10:24:53.8389240Z # but doesn't expose it in anyway. The job name is part of the job message the 2023-05-06T10:24:53.8389628Z # runner receives, so it's there and printed out to the diag log. Below is the 2023-05-06T10:24:53.8390035Z # code responsible for printing it. Need to check with GitHub to see if they can 2023-05-06T10:24:53.8390380Z # expose this variable as part of GitHub context. 2023-05-06T10:24:53.8390776Z # https://github.com/actions/runner/blob/main/src/Runner.Worker/JobExtension.cs#L345 2023-05-06T10:24:53.8391183Z pushd "/home/weiwangmeta/actions-runner/_work/pytorch/../../_diag" 2023-05-06T10:24:53.8391475Z pwd 2023-05-06T10:24:53.8391651Z  2023-05-06T10:24:53.8391956Z LOG_FILE=$(grep -l -r "d719f0276d69a8315b65f4c4500cfc1cdaddb025" *.log | tail -n 1) 2023-05-06T10:24:53.8392279Z if [ -n "${LOG_FILE}" ]; then 2023-05-06T10:24:53.8392619Z  JOB_NAME=$(grep -r "\"jobDisplayName\"" "${LOG_FILE}" | awk -F '[:,]' '{print $2}' | sed 's/"//g' | xargs) 2023-05-06T10:24:53.8392996Z  echo "job-name=${JOB_NAME}" >> "${GITHUB_OUTPUT}" 2023-05-06T10:24:53.8393252Z fi 2023-05-06T10:24:53.8393442Z  2023-05-06T10:24:53.8393621Z popd 2023-05-06T10:24:53.8412224Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:53.8412501Z env: 2023-05-06T10:24:53.8412704Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:53.8412940Z GPU_FLAG: --gpus all 2023-05-06T10:24:53.8413158Z ##[endgroup] 2023-05-06T10:24:53.8451132Z + pushd /home/weiwangmeta/actions-runner/_work/pytorch/../../_diag 2023-05-06T10:24:53.8451810Z /home/weiwangmeta/actions-runner/_diag /home/weiwangmeta/actions-runner/_work/pytorch/pytorch 2023-05-06T10:24:53.8452258Z /home/weiwangmeta/actions-runner/_diag 2023-05-06T10:24:53.8452495Z + pwd 2023-05-06T10:24:53.8458967Z ++ tail -n 1 2023-05-06T10:24:53.8474224Z ++ grep -l -r d719f0276d69a8315b65f4c4500cfc1cdaddb025 Runner_20230404-153039-utc.log Runner_20230404-184000-utc.log Runner_20230405-144853-utc.log Runner_20230406-025157-utc.log Runner_20230406-144353-utc.log Runner_20230406-234514-utc.log Runner_20230407-134051-utc.log Runner_20230408-015551-utc.log Runner_20230408-023757-utc.log Runner_20230408-142501-utc.log Runner_20230410-150121-utc.log Runner_20230411-025314-utc.log Runner_20230411-143630-utc.log Runner_20230412-024554-utc.log Runner_20230412-143634-utc.log Runner_20230412-191739-utc.log Runner_20230413-023752-utc.log Runner_20230413-145114-utc.log Runner_20230413-233209-utc.log Runner_20230414-143341-utc.log Runner_20230415-025836-utc.log Runner_20230415-144725-utc.log Runner_20230416-024150-utc.log Runner_20230416-143805-utc.log Runner_20230417-034412-utc.log Runner_20230417-142339-utc.log Runner_20230417-201542-utc.log Runner_20230418-023759-utc.log Runner_20230501-200217-utc.log Runner_20230504-145002-utc.log Worker_20230406-144359-utc.log Worker_20230406-234520-utc.log Worker_20230407-023155-utc.log Worker_20230407-051413-utc.log 
Worker_20230407-134056-utc.log Worker_20230407-165955-utc.log Worker_20230407-201644-utc.log Worker_20230408-015557-utc.log Worker_20230408-023804-utc.log Worker_20230408-052156-utc.log Worker_20230408-142507-utc.log Worker_20230408-224305-utc.log Worker_20230409-015812-utc.log Worker_20230409-062649-utc.log Worker_20230409-095822-utc.log Worker_20230410-022855-utc.log Worker_20230410-150126-utc.log Worker_20230411-025320-utc.log Worker_20230411-143636-utc.log Worker_20230412-024600-utc.log Worker_20230412-143641-utc.log Worker_20230412-191745-utc.log Worker_20230413-023759-utc.log Worker_20230413-145120-utc.log Worker_20230413-173503-utc.log Worker_20230413-233214-utc.log Worker_20230414-021353-utc.log Worker_20230414-053346-utc.log Worker_20230414-081549-utc.log Worker_20230414-104209-utc.log Worker_20230414-143347-utc.log Worker_20230415-025842-utc.log Worker_20230415-144731-utc.log Worker_20230416-024156-utc.log Worker_20230416-143811-utc.log Worker_20230417-034418-utc.log Worker_20230417-142345-utc.log Worker_20230417-201546-utc.log Worker_20230418-023805-utc.log Worker_20230418-051238-utc.log Worker_20230418-081615-utc.log Worker_20230418-113249-utc.log Worker_20230418-150335-utc.log Worker_20230418-181502-utc.log Worker_20230418-212200-utc.log Worker_20230419-050230-utc.log Worker_20230419-073336-utc.log Worker_20230419-120813-utc.log Worker_20230419-145522-utc.log Worker_20230419-173001-utc.log Worker_20230419-205701-utc.log Worker_20230419-231434-utc.log Worker_20230420-012802-utc.log Worker_20230420-034743-utc.log Worker_20230420-054441-utc.log Worker_20230420-075911-utc.log Worker_20230420-141325-utc.log Worker_20230420-161226-utc.log Worker_20230420-193455-utc.log Worker_20230420-202211-utc.log Worker_20230421-023143-utc.log Worker_20230421-044251-utc.log Worker_20230421-141208-utc.log Worker_20230421-164841-utc.log Worker_20230421-194250-utc.log Worker_20230422-010144-utc.log Worker_20230422-030933-utc.log Worker_20230422-064239-utc.log Worker_20230422-102156-utc.log Worker_20230422-124726-utc.log Worker_20230422-151106-utc.log Worker_20230422-182634-utc.log Worker_20230423-004154-utc.log Worker_20230423-190216-utc.log Worker_20230424-023049-utc.log Worker_20230424-045106-utc.log Worker_20230424-083730-utc.log Worker_20230424-104607-utc.log Worker_20230424-141310-utc.log Worker_20230424-172240-utc.log Worker_20230425-022831-utc.log Worker_20230425-075133-utc.log Worker_20230425-105413-utc.log Worker_20230425-151100-utc.log Worker_20230425-174141-utc.log Worker_20230425-200127-utc.log Worker_20230425-221507-utc.log Worker_20230426-003459-utc.log Worker_20230426-025434-utc.log Worker_20230426-141028-utc.log Worker_20230427-022910-utc.log Worker_20230427-043045-utc.log Worker_20230427-072020-utc.log Worker_20230427-093502-utc.log Worker_20230427-141114-utc.log Worker_20230427-162312-utc.log Worker_20230427-192125-utc.log Worker_20230428-023036-utc.log Worker_20230428-065359-utc.log Worker_20230428-065936-utc.log Worker_20230428-141032-utc.log Worker_20230428-141610-utc.log Worker_20230428-145812-utc.log Worker_20230428-184744-utc.log Worker_20230428-212013-utc.log Worker_20230429-005725-utc.log Worker_20230429-023007-utc.log Worker_20230429-023539-utc.log Worker_20230429-024119-utc.log Worker_20230429-140958-utc.log Worker_20230429-141543-utc.log Worker_20230429-164638-utc.log Worker_20230429-165210-utc.log Worker_20230430-023008-utc.log Worker_20230430-023547-utc.log Worker_20230430-140952-utc.log Worker_20230430-162955-utc.log Worker_20230501-022945-utc.log 
Worker_20230501-074531-utc.log Worker_20230501-141035-utc.log Worker_20230501-181425-utc.log Worker_20230501-194307-utc.log Worker_20230501-201757-utc.log Worker_20230502-004907-utc.log Worker_20230502-051924-utc.log Worker_20230502-073147-utc.log Worker_20230502-141113-utc.log Worker_20230502-163800-utc.log Worker_20230502-185134-utc.log Worker_20230502-220132-utc.log Worker_20230503-072633-utc.log Worker_20230503-095013-utc.log Worker_20230503-115558-utc.log Worker_20230503-142040-utc.log Worker_20230503-185500-utc.log Worker_20230503-211905-utc.log Worker_20230503-235153-utc.log Worker_20230504-072540-utc.log Worker_20230504-101455-utc.log Worker_20230504-145005-utc.log Worker_20230504-215040-utc.log Worker_20230505-021202-utc.log Worker_20230505-045317-utc.log Worker_20230505-080039-utc.log Worker_20230505-123251-utc.log Worker_20230505-160820-utc.log Worker_20230505-232106-utc.log Worker_20230506-033323-utc.log Worker_20230506-072542-utc.log Worker_20230506-101728-utc.log 2023-05-06T10:24:53.9588269Z + LOG_FILE=Worker_20230506-101728-utc.log 2023-05-06T10:24:53.9588641Z + '[' -n Worker_20230506-101728-utc.log ']' 2023-05-06T10:24:53.9595491Z ++ grep -r '"jobDisplayName"' Worker_20230506-101728-utc.log 2023-05-06T10:24:53.9596215Z ++ awk -F '[:,]' '{print $2}' 2023-05-06T10:24:53.9597170Z ++ sed 's/"//g' 2023-05-06T10:24:53.9600333Z ++ xargs 2023-05-06T10:24:53.9633307Z + JOB_NAME='cuda11.8-py3.10-gcc7-sm80 / test (inductor_torchbench_perf' 2023-05-06T10:24:53.9633831Z + echo 'job-name=cuda11.8-py3.10-gcc7-sm80 / test (inductor_torchbench_perf' 2023-05-06T10:24:53.9634111Z + popd 2023-05-06T10:24:53.9634813Z /home/weiwangmeta/actions-runner/_work/pytorch/pytorch 2023-05-06T10:24:53.9656420Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2023-05-06T10:24:53.9656757Z echo "Workflow: ${GITHUB_WORKFLOW}" 2023-05-06T10:24:53.9657021Z echo "Job name: ${JOB_NAME}" 2023-05-06T10:24:53.9657228Z  2023-05-06T10:24:53.9657477Z .github/scripts/filter_test_configs.py \ 2023-05-06T10:24:53.9657811Z  --workflow "${GITHUB_WORKFLOW}" \ 2023-05-06T10:24:53.9658066Z  --job-name "${JOB_NAME}" \ 2023-05-06T10:24:53.9659507Z  --test-matrix "{"include": [{"config": "inductor_huggingface_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 1, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 2, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 3, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 4, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 5, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 6, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}]}" \ 2023-05-06T10:24:53.9661050Z  --pr-number "" \ 2023-05-06T10:24:53.9661275Z  --tag "" \ 2023-05-06T10:24:53.9661509Z  --event-name "schedule" \ 2023-05-06T10:24:53.9661741Z 
 --schedule "0 7 * * *" 2023-05-06T10:24:53.9680033Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:53.9680516Z env: 2023-05-06T10:24:53.9680711Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:53.9680943Z GPU_FLAG: --gpus all 2023-05-06T10:24:53.9681369Z GITHUB_TOKEN: *** 2023-05-06T10:24:53.9681669Z JOB_NAME: cuda11.8-py3.10-gcc7-sm80 / test (inductor_torchbench_perf 2023-05-06T10:24:53.9681949Z ##[endgroup] 2023-05-06T10:24:53.9719736Z Workflow: inductor-A100-perf-nightly 2023-05-06T10:24:53.9720622Z Job name: cuda11.8-py3.10-gcc7-sm80 / test (inductor_torchbench_perf 2023-05-06T10:24:54.2409766Z ##[group]Run echo "{"include": [{"config": "inductor_huggingface_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 1, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 2, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 3, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 4, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 5, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 6, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}]}" 2023-05-06T10:24:54.2413226Z echo "{"include": [{"config": "inductor_huggingface_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_huggingface_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 1, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 2, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 3, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 4, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 5, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_timm_perf", "shard": 6, "num_shards": 6, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 1, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 2, "num_shards": 3, "runner": "linux.gcp.a100.large"}, {"config": "inductor_torchbench_perf", "shard": 3, "num_shards": 3, "runner": "linux.gcp.a100.large"}]}" 2023-05-06T10:24:54.2434815Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:54.2435151Z env: 2023-05-06T10:24:54.2435496Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:54.2435806Z GPU_FLAG: --gpus all 2023-05-06T10:24:54.2436107Z ##[endgroup] 2023-05-06T10:24:54.2478669Z {include: [{config: inductor_huggingface_perf, shard: 1, num_shards: 3, runner: linux.gcp.a100.large}, {config: 
inductor_huggingface_perf, shard: 2, num_shards: 3, runner: linux.gcp.a100.large}, {config: inductor_huggingface_perf, shard: 3, num_shards: 3, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 1, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 2, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 3, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 4, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 5, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_timm_perf, shard: 6, num_shards: 6, runner: linux.gcp.a100.large}, {config: inductor_torchbench_perf, shard: 1, num_shards: 3, runner: linux.gcp.a100.large}, {config: inductor_torchbench_perf, shard: 2, num_shards: 3, runner: linux.gcp.a100.large}, {config: inductor_torchbench_perf, shard: 3, num_shards: 3, runner: linux.gcp.a100.large}]} 2023-05-06T10:24:54.2642672Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2023-05-06T10:24:54.2643030Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2023-05-06T10:24:54.2660909Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T10:24:54.2661158Z env: 2023-05-06T10:24:54.2661364Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:54.2661597Z GPU_FLAG: --gpus all 2023-05-06T10:24:54.2661802Z JOB_TIMEOUT: 720 2023-05-06T10:24:54.2662009Z ##[endgroup] 2023-05-06T10:24:54.2755373Z ##[group]Run set -x 2023-05-06T10:24:54.2755920Z set -x 2023-05-06T10:24:54.2756200Z  2023-05-06T10:24:54.2756437Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2023-05-06T10:24:54.2756948Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2023-05-06T10:24:54.2757245Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2023-05-06T10:24:54.2757527Z  TEST_COMMAND=.ci/onnx/test.sh 2023-05-06T10:24:54.2757762Z else 2023-05-06T10:24:54.2757986Z  TEST_COMMAND=.ci/pytorch/test.sh 2023-05-06T10:24:54.2758223Z fi 2023-05-06T10:24:54.2758411Z  2023-05-06T10:24:54.2758681Z COMMIT_MESSAGES=$(git cherry -v "origin/${GIT_DEFAULT_BRANCH:-main}") 2023-05-06T10:24:54.2758954Z  2023-05-06T10:24:54.2759203Z # sanitize the input commit message and PR body here: 2023-05-06T10:24:54.2759448Z # 2023-05-06T10:24:54.2759805Z # trim all new lines from commit messages + PR_BODY to avoid issues with batch environment 2023-05-06T10:24:54.2760264Z # variable copying. 
see https://github.com/pytorch/pytorch/pull/80043#issuecomment-1167796028 2023-05-06T10:24:54.2760642Z COMMIT_MESSAGES="${COMMIT_MESSAGES//[$'\n\r']}" 2023-05-06T10:24:54.2760915Z PR_BODY="${PR_BODY//[$'\n\r']}" 2023-05-06T10:24:54.2761137Z  2023-05-06T10:24:54.2761437Z # then trim all special characters like single and double quotes to avoid unescaped inputs to 2023-05-06T10:24:54.2761773Z # wreak havoc internally 2023-05-06T10:24:54.2762058Z export COMMIT_MESSAGES="${COMMIT_MESSAGES//[\'\"]}" 2023-05-06T10:24:54.2762339Z export PR_BODY="${PR_BODY//[\'\"]}" 2023-05-06T10:24:54.2762566Z  2023-05-06T10:24:54.2762839Z # detached container should get cleaned up by teardown_ec2_linux 2023-05-06T10:24:54.2763196Z # TODO: Stop building test binaries as part of the build phase 2023-05-06T10:24:54.2763519Z # Used for GPU_FLAG since that doesn't play nice 2023-05-06T10:24:54.2763846Z # shellcheck disable=SC2086,SC2090 2023-05-06T10:24:54.2764120Z container_name=$(docker run \ 2023-05-06T10:24:54.2764355Z  ${GPU_FLAG:-} \ 2023-05-06T10:24:54.2764583Z  -e BUILD_ENVIRONMENT \ 2023-05-06T10:24:54.2764827Z  -e PR_NUMBER \ 2023-05-06T10:24:54.2765046Z  -e GITHUB_ACTIONS \ 2023-05-06T10:24:54.2765274Z  -e BASE_SHA \ 2023-05-06T10:24:54.2765489Z  -e BRANCH \ 2023-05-06T10:24:54.2765701Z  -e SHA1 \ 2023-05-06T10:24:54.2765963Z  -e AWS_DEFAULT_REGION \ 2023-05-06T10:24:54.2766203Z  -e IN_WHEEL_TEST \ 2023-05-06T10:24:54.2766432Z  -e SHARD_NUMBER \ 2023-05-06T10:24:54.2766644Z  -e TEST_CONFIG \ 2023-05-06T10:24:54.2766875Z  -e NUM_TEST_SHARDS \ 2023-05-06T10:24:54.2767096Z  -e PR_BODY \ 2023-05-06T10:24:54.2767313Z  -e COMMIT_MESSAGES \ 2023-05-06T10:24:54.2767559Z  -e CONTINUE_THROUGH_ERROR \ 2023-05-06T10:24:54.2767822Z  -e PYTORCH_RETRY_TEST_CASES \ 2023-05-06T10:24:54.2768323Z  -e PYTORCH_OVERRIDE_FLAKY_SIGNAL \ 2023-05-06T10:24:54.2768574Z  -e PR_LABELS \ 2023-05-06T10:24:54.2768828Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2023-05-06T10:24:54.2769067Z  -e SCCACHE_BUCKET \ 2023-05-06T10:24:54.2769314Z  -e SCCACHE_S3_KEY_PREFIX \ 2023-05-06T10:24:54.2769549Z  -e XLA_CUDA \ 2023-05-06T10:24:54.2769790Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2023-05-06T10:24:54.2770063Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2023-05-06T10:24:54.2770351Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2023-05-06T10:24:54.2770645Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2023-05-06T10:24:54.2770961Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2023-05-06T10:24:54.2771407Z  --ulimit stack=10485760:83886080 \ 2023-05-06T10:24:54.2771686Z  --security-opt seccomp=unconfined \ 2023-05-06T10:24:54.2771946Z  --cap-add=SYS_PTRACE \ 2023-05-06T10:24:54.2772186Z  --ipc=host \ 2023-05-06T10:24:54.2772417Z  --shm-size="${SHM_SIZE}" \ 2023-05-06T10:24:54.2772644Z  --tty \ 2023-05-06T10:24:54.2772841Z  --detach \ 2023-05-06T10:24:54.2773076Z  --name="${container_name}" \ 2023-05-06T10:24:54.2773315Z  --user jenkins \ 2023-05-06T10:24:54.2773586Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2023-05-06T10:24:54.2773889Z  -w /var/lib/jenkins/workspace \ 2023-05-06T10:24:54.2774137Z  "${DOCKER_IMAGE}" 2023-05-06T10:24:54.2774335Z ) 2023-05-06T10:24:54.2774606Z # Propagate download.pytorch.org IP to container 2023-05-06T10:24:54.2775021Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2023-05-06T10:24:54.2775432Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2023-05-06T10:24:54.2775861Z docker exec -t "${container_name}" sh -c "pip install $(echo 
dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2023-05-06T10:24:54.2793015Z shell: /usr/bin/bash -e {0} 2023-05-06T10:24:54.2793242Z env: 2023-05-06T10:24:54.2793440Z GIT_DEFAULT_BRANCH: main 2023-05-06T10:24:54.2793676Z GPU_FLAG: --gpus all 2023-05-06T10:24:54.2793985Z BUILD_ENVIRONMENT: linux-bionic-cuda11.8-py3.10-gcc7-sm80 2023-05-06T10:24:54.2794261Z PR_NUMBER: 2023-05-06T10:24:54.2794465Z BRANCH: main 2023-05-06T10:24:54.2794711Z SHA1: d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:24:54.2795002Z BASE_SHA: d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:24:54.2795256Z PYTORCH_RETRY_TEST_CASES: 1 2023-05-06T10:24:54.2795507Z PYTORCH_OVERRIDE_FLAKY_SIGNAL: 1 2023-05-06T10:24:54.2795775Z TEST_CONFIG: inductor_torchbench_perf 2023-05-06T10:24:54.2796066Z SHARD_NUMBER: 2 2023-05-06T10:24:54.2796279Z NUM_TEST_SHARDS: 3 2023-05-06T10:24:54.2796489Z PR_BODY: 2023-05-06T10:24:54.2796829Z CONTINUE_THROUGH_ERROR: False 2023-05-06T10:24:54.2797138Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2023-05-06T10:24:54.2797481Z SCCACHE_S3_KEY_PREFIX: inductor-A100-perf-nightly 2023-05-06T10:24:54.2797732Z SHM_SIZE: 2g 2023-05-06T10:24:54.2798181Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:24:54.2798620Z XLA_CUDA: 2023-05-06T10:24:54.2798936Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2023-05-06T10:24:54.2799265Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2023-05-06T10:24:54.2799536Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2023-05-06T10:24:54.2799773Z ##[endgroup] 2023-05-06T10:24:54.2836779Z + [[ inductor_torchbench_perf == \m\u\l\t\i\g\p\u ]] 2023-05-06T10:24:54.2838020Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *onnx* ]] 2023-05-06T10:24:54.2838540Z + TEST_COMMAND=.ci/pytorch/test.sh 2023-05-06T10:24:54.2841334Z ++ git cherry -v origin/main 2023-05-06T10:24:54.2890615Z + COMMIT_MESSAGES= 2023-05-06T10:24:54.2891251Z + COMMIT_MESSAGES= 2023-05-06T10:24:54.2891627Z + PR_BODY= 2023-05-06T10:24:54.2892029Z + export COMMIT_MESSAGES= 2023-05-06T10:24:54.2892422Z + COMMIT_MESSAGES= 2023-05-06T10:24:54.2892813Z + export PR_BODY= 2023-05-06T10:24:54.2893187Z + PR_BODY= 2023-05-06T10:24:54.2904564Z +++ nproc --ignore=2 2023-05-06T10:24:54.2917420Z ++ docker run --gpus all -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e PR_BODY -e COMMIT_MESSAGES -e CONTINUE_THROUGH_ERROR -e PYTORCH_RETRY_TEST_CASES -e PYTORCH_OVERRIDE_FLAKY_SIGNAL -e PR_LABELS -e MAX_JOBS=10 -e SCCACHE_BUCKET -e SCCACHE_S3_KEY_PREFIX -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 --env-file=/tmp/github_env_4900301301 --ulimit stack=10485760:83886080 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=2g --tty --detach --name= --user jenkins -v /home/weiwangmeta/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T10:24:59.0048907Z + container_name=75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T10:24:59.0049402Z + grep download.pytorch.org /etc/hosts 2023-05-06T10:24:59.0052300Z + docker exec -i 
75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c sudo bash -c '/bin/cat >> /etc/hosts' 2023-05-06T10:24:59.0950631Z + echo DOCKER_CONTAINER_ID=75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T10:24:59.0956065Z ++ echo dist/torch-2.1.0a0+gitd719f02-cp310-cp310-linux_x86_64.whl 2023-05-06T10:24:59.0958565Z + docker exec -t 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c sh -c 'pip install dist/torch-2.1.0a0+gitd719f02-cp310-cp310-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2023-05-06T10:24:59.6430834Z Processing ./dist/torch-2.1.0a0+gitd719f02-cp310-cp310-linux_x86_64.whl 2023-05-06T10:25:00.5342739Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (3.1.2) 2023-05-06T10:25:00.5344299Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (2.8.8) 2023-05-06T10:25:00.5349108Z Requirement already satisfied: typing-extensions in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (4.5.0) 2023-05-06T10:25:00.5353856Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (3.9.0) 2023-05-06T10:25:00.5360572Z Requirement already satisfied: sympy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (1.11.1) 2023-05-06T10:25:00.5364670Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (2023.4.0) 2023-05-06T10:25:00.5377371Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.1.0a0+gitd719f02) (3.3.0) 2023-05-06T10:25:00.5446318Z Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from opt-einsum>=3.3->torch==2.1.0a0+gitd719f02) (1.21.2) 2023-05-06T10:25:00.5960117Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch==2.1.0a0+gitd719f02) (2.1.2) 2023-05-06T10:25:00.6138466Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy->torch==2.1.0a0+gitd719f02) (1.3.0) 2023-05-06T10:25:01.4062419Z Installing collected packages: torch 2023-05-06T10:25:10.7519344Z Successfully installed torch-2.1.0a0+gitd719f02 2023-05-06T10:25:10.8562174Z + echo 'Environment variables:' 2023-05-06T10:25:10.8562483Z Environment variables: 2023-05-06T10:25:10.8562706Z + env 2023-05-06T10:25:10.8572529Z SHARD_NUMBER=2 2023-05-06T10:25:10.8573082Z NV_LIBCUBLAS_DEV_VERSION=11.11.3.6-1 2023-05-06T10:25:10.8573612Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-8 2023-05-06T10:25:10.8573937Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2023-05-06T10:25:10.8574351Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.15.5-1+cuda11.8 2023-05-06T10:25:10.8577106Z UCC_HOME=/usr 2023-05-06T10:25:10.8577854Z BUILD_ENVIRONMENT=linux-bionic-cuda11.8-py3.10-gcc7-sm80 2023-05-06T10:25:10.8578378Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2023-05-06T10:25:10.8578820Z NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-8=11.8.0.86-1 2023-05-06T10:25:10.8579261Z INSTALLED_DB=yes 2023-05-06T10:25:10.8580050Z HOSTNAME=75a0724c4dd3 2023-05-06T10:25:10.8580374Z GITHUB_REF_NAME=main 2023-05-06T10:25:10.8581040Z GITHUB_API_URL=https://api.github.com 2023-05-06T10:25:10.8581315Z 
GITHUB_REPOSITORY_OWNER_ID=21003710 2023-05-06T10:25:10.8581621Z OPENSSL_DIR=/opt/openssl 2023-05-06T10:25:10.8582070Z UCC_COMMIT=7cb07a76ccedad7e56ceb136b865eb9319c258ea 2023-05-06T10:25:10.8583264Z GITHUB_STEP_SUMMARY=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/step_summary_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8583651Z CUDA_PATH=/usr/local/cuda 2023-05-06T10:25:10.8584147Z GITHUB_ACTION_PATH=/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2023-05-06T10:25:10.8584501Z GITHUB_RUN_ATTEMPT=1 2023-05-06T10:25:10.8584741Z TEST_CONFIG=inductor_torchbench_perf 2023-05-06T10:25:10.8585034Z NV_LIBNPP_VERSION=11.8.0.86-1 2023-05-06T10:25:10.8585391Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-8=11.8.87-1 2023-05-06T10:25:10.8585665Z GITHUB_REPOSITORY_OWNER=pytorch 2023-05-06T10:25:10.8585922Z GITHUB_ACTIONS=true 2023-05-06T10:25:10.8586154Z NVIDIA_VISIBLE_DEVICES=all 2023-05-06T10:25:10.8586407Z NV_NVPROF_VERSION=11.8.87-1 2023-05-06T10:25:10.8586895Z NV_LIBCUSPARSE_VERSION=11.7.5.86-1 2023-05-06T10:25:10.8587753Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-perf-test-nightly.yml@refs/heads/main 2023-05-06T10:25:10.8588393Z NVIDIA_PRODUCT_NAME=CUDA 2023-05-06T10:25:10.8588734Z CI=true 2023-05-06T10:25:10.8589166Z PYTORCH_OVERRIDE_FLAKY_SIGNAL=1 2023-05-06T10:25:10.8589891Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-8=11.11.3.6-1 2023-05-06T10:25:10.8590460Z BRANCH=main 2023-05-06T10:25:10.8590857Z GITHUB_HEAD_REF= 2023-05-06T10:25:10.8591360Z UCX_COMMIT=00bcc6bb18fc282eb160623b4c0d300147f579af 2023-05-06T10:25:10.8591736Z GITHUB_ACTOR=pytorchmergebot 2023-05-06T10:25:10.8592047Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2023-05-06T10:25:10.8592505Z GITHUB_ACTION_REF= 2023-05-06T10:25:10.8592948Z NCCL_VERSION=2.15.5-1 2023-05-06T10:25:10.8593331Z GITHUB_ACTION=__self 2023-05-06T10:25:10.8593826Z GITHUB_REF_PROTECTED=true 2023-05-06T10:25:10.8594540Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2023-05-06T10:25:10.8594886Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2023-05-06T10:25:10.8598519Z *** 2023-05-06T10:25:10.8598765Z INSTALLED_VISION=yes 2023-05-06T10:25:10.8598973Z NVARCH=x86_64 2023-05-06T10:25:10.8599321Z NV_LIBCUSPARSE_DEV_VERSION=11.7.5.86-1 2023-05-06T10:25:10.8599574Z HOME=/var/lib/jenkins 2023-05-06T10:25:10.8600101Z GITHUB_STATE=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/save_state_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8600530Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2023-05-06T10:25:10.8600791Z GITHUB_ACTION_REPOSITORY= 2023-05-06T10:25:10.8601018Z GITHUB_REF_TYPE=branch 2023-05-06T10:25:10.8601301Z NV_LIBNCCL_PACKAGE_VERSION=2.15.5-1 2023-05-06T10:25:10.8601554Z GITHUB_RETENTION_DAYS=90 2023-05-06T10:25:10.8601903Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2023-05-06T10:25:10.8602305Z NV_LIBNCCL_PACKAGE=libnccl2=2.15.5-1+cuda11.8 2023-05-06T10:25:10.8602849Z GITHUB_ENV=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/set_env_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8603581Z DEBIAN_FRONTEND=noninteractive 2023-05-06T10:25:10.8603996Z NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2023-05-06T10:25:10.8604260Z GITHUB_REF=refs/heads/main 2023-05-06T10:25:10.8604534Z NV_CUDA_LIB_VERSION=11.8.0-1 2023-05-06T10:25:10.8604798Z GITHUB_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8605070Z INSTALLED_PROTOBUF=yes 2023-05-06T10:25:10.8605316Z ANACONDA_PYTHON_VERSION=3.10 
2023-05-06T10:25:10.8605565Z GITHUB_REPOSITORY_ID=65600975 2023-05-06T10:25:10.8605791Z GITHUB_RUN_ID=4900301301 2023-05-06T10:25:10.8606113Z NV_LIBNPP_PACKAGE=libnpp-11-8=11.8.0.86-1 2023-05-06T10:25:10.8606386Z NV_LIBNCCL_PACKAGE_NAME=libnccl2 2023-05-06T10:25:10.8606635Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2023-05-06T10:25:10.8606915Z NV_NVTX_VERSION=11.8.86-1 2023-05-06T10:25:10.8607284Z CONTINUE_THROUGH_ERROR=False 2023-05-06T10:25:10.8607553Z GITHUB_SERVER_URL=https://github.com 2023-05-06T10:25:10.8607805Z MAX_JOBS=10 2023-05-06T10:25:10.8608026Z GITHUB_ACTOR_ID=97764156 2023-05-06T10:25:10.8608295Z NV_LIBCUBLAS_VERSION=11.11.3.6-1 2023-05-06T10:25:10.8608646Z NV_LIBCUBLAS_PACKAGE=libcublas-11-8=11.11.3.6-1 2023-05-06T10:25:10.8609125Z GITHUB_EVENT_PATH=/home/weiwangmeta/actions-runner/_work/_temp/_github_workflow/event.json 2023-05-06T10:25:10.8609427Z UCX_HOME=/usr 2023-05-06T10:25:10.8609652Z PYTORCH_RETRY_TEST_CASES=1 2023-05-06T10:25:10.8609949Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2023-05-06T10:25:10.8610246Z BASE_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8610615Z NV_CUDA_CUDART_DEV_VERSION=11.8.89-1 2023-05-06T10:25:10.8610853Z PR_BODY= 2023-05-06T10:25:10.8611042Z GITHUB_BASE_REF= 2023-05-06T10:25:10.8611250Z TERM=xterm 2023-05-06T10:25:10.8611449Z XLA_CUDA= 2023-05-06T10:25:10.8611679Z NV_NVML_DEV_VERSION=11.8.86-1 2023-05-06T10:25:10.8611929Z TORCH_CUDA_ARCH_LIST=Maxwell 2023-05-06T10:25:10.8612160Z CUDA_VERSION=11.8.0 2023-05-06T10:25:10.8612465Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-8 2023-05-06T10:25:10.8612743Z OPENSSL_ROOT_DIR=/opt/openssl 2023-05-06T10:25:10.8613280Z GITHUB_PATH=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/add_path_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8613636Z GITHUB_JOB=test 2023-05-06T10:25:10.8613973Z SCCACHE_S3_KEY_PREFIX=inductor-A100-perf-nightly 2023-05-06T10:25:10.8614244Z COMMIT_MESSAGES= 2023-05-06T10:25:10.8614500Z NVIDIA_DRIVER_CAPABILITIES=compute,utility 2023-05-06T10:25:10.8614742Z NUM_TEST_SHARDS=3 2023-05-06T10:25:10.8614951Z PR_NUMBER= 2023-05-06T10:25:10.8615465Z GITHUB_OUTPUT=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/set_output_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8615798Z SHLVL=1 2023-05-06T10:25:10.8616117Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-8 2023-05-06T10:25:10.8616413Z GITHUB_REPOSITORY=pytorch/pytorch 2023-05-06T10:25:10.8617922Z NVIDIA_REQUIRE_CUDA=cuda>=11.8 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=tesla,driver>=515,driver<516 brand=unknown,driver>=515,driver<516 brand=nvidia,driver>=515,driver<516 brand=nvidiartx,driver>=515,driver<516 brand=geforce,driver>=515,driver<516 brand=geforcertx,driver>=515,driver<516 brand=quadro,driver>=515,driver<516 
brand=quadrortx,driver>=515,driver<516 brand=titan,driver>=515,driver<516 brand=titanrtx,driver>=515,driver<516 2023-05-06T10:25:10.8619633Z NV_LIBNPP_DEV_VERSION=11.8.0.86-1 2023-05-06T10:25:10.8619899Z SHA1=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8620160Z GITHUB_EVENT_NAME=schedule 2023-05-06T10:25:10.8620483Z NV_CUDA_CUDART_VERSION=11.8.89-1 2023-05-06T10:25:10.8620801Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2023-05-06T10:25:10.8621059Z GITHUB_RUN_NUMBER=622 2023-05-06T10:25:10.8621399Z GITHUB_WORKFLOW=inductor-A100-perf-nightly 2023-05-06T10:25:10.8621832Z PATH=/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-05-06T10:25:10.8622272Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.15.5-1 2023-05-06T10:25:10.8622581Z GITHUB_WORKFLOW_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8623123Z GITHUB_WORKSPACE=/home/weiwangmeta/actions-runner/_work/pytorch/pytorch 2023-05-06T10:25:10.8623448Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2023-05-06T10:25:10.8623724Z SKIP_SCCACHE_INITIALIZATION=1 2023-05-06T10:25:10.8623966Z _=/usr/bin/env 2023-05-06T10:25:10.8624322Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2023-05-06T10:25:10.8793485Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch 2023-05-06T10:25:10.8794191Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin 2023-05-06T10:25:10.8794679Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib 2023-05-06T10:25:10.8795170Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/test 2023-05-06T10:25:10.8795467Z + BUILD_DIR=build 2023-05-06T10:25:10.8795706Z + BUILD_RENAMED_DIR=build_renamed 2023-05-06T10:25:10.8795991Z + BUILD_BIN_DIR=build/bin 2023-05-06T10:25:10.8796223Z + export VALGRIND=ON 2023-05-06T10:25:10.8796438Z + VALGRIND=ON 2023-05-06T10:25:10.8796968Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *clang9* ]] 2023-05-06T10:25:10.8797412Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 != *bazel* ]] 2023-05-06T10:25:10.8802232Z ++ realpath build/custom_test_artifacts 2023-05-06T10:25:10.8814049Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2023-05-06T10:25:10.8819254Z ++ dirname .ci/pytorch/test.sh 2023-05-06T10:25:10.8829358Z + source .ci/pytorch/common.sh 2023-05-06T10:25:10.8834439Z +++ dirname .ci/pytorch/common.sh 2023-05-06T10:25:10.8848302Z ++ source .ci/pytorch/common_utils.sh 2023-05-06T10:25:10.8849980Z +++ declare -f -t trap_add 2023-05-06T10:25:10.8857195Z ++ set -ex 2023-05-06T10:25:10.8857833Z ++ [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *rocm* ]] 2023-05-06T10:25:10.8858127Z ++ BUILD_TEST_LIBTORCH=0 2023-05-06T10:25:10.8858441Z + echo 'Environment variables' 2023-05-06T10:25:10.8858686Z Environment variables 2023-05-06T10:25:10.8858886Z + env 2023-05-06T10:25:10.8867222Z SHARD_NUMBER=2 2023-05-06T10:25:10.8867824Z NV_LIBCUBLAS_DEV_VERSION=11.11.3.6-1 2023-05-06T10:25:10.8868517Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-11-8 2023-05-06T10:25:10.8869211Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2023-05-06T10:25:10.8869780Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.15.5-1+cuda11.8 2023-05-06T10:25:10.8870028Z UCC_HOME=/usr 2023-05-06T10:25:10.8870637Z BUILD_ENVIRONMENT=linux-bionic-cuda11.8-py3.10-gcc7-sm80 2023-05-06T10:25:10.8871125Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2023-05-06T10:25:10.8871679Z 
NV_LIBNPP_DEV_PACKAGE=libnpp-dev-11-8=11.8.0.86-1 2023-05-06T10:25:10.8872102Z INSTALLED_DB=yes 2023-05-06T10:25:10.8872541Z HOSTNAME=75a0724c4dd3 2023-05-06T10:25:10.8872870Z GITHUB_REF_NAME=main 2023-05-06T10:25:10.8873285Z GITHUB_API_URL=https://api.github.com 2023-05-06T10:25:10.8873773Z GITHUB_REPOSITORY_OWNER_ID=21003710 2023-05-06T10:25:10.8874273Z OPENSSL_DIR=/opt/openssl 2023-05-06T10:25:10.8874820Z UCC_COMMIT=7cb07a76ccedad7e56ceb136b865eb9319c258ea 2023-05-06T10:25:10.8875727Z GITHUB_STEP_SUMMARY=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/step_summary_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8876558Z CUDA_PATH=/usr/local/cuda 2023-05-06T10:25:10.8877587Z GITHUB_ACTION_PATH=/home/weiwangmeta/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2023-05-06T10:25:10.8877943Z GITHUB_RUN_ATTEMPT=1 2023-05-06T10:25:10.8878198Z TEST_CONFIG=inductor_torchbench_perf 2023-05-06T10:25:10.8878480Z NV_LIBNPP_VERSION=11.8.0.86-1 2023-05-06T10:25:10.8878838Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-11-8=11.8.87-1 2023-05-06T10:25:10.8879123Z GITHUB_REPOSITORY_OWNER=pytorch 2023-05-06T10:25:10.8879526Z GITHUB_ACTIONS=true 2023-05-06T10:25:10.8879878Z NVIDIA_VISIBLE_DEVICES=all 2023-05-06T10:25:10.8880390Z NV_NVPROF_VERSION=11.8.87-1 2023-05-06T10:25:10.8880960Z NV_LIBCUSPARSE_VERSION=11.7.5.86-1 2023-05-06T10:25:10.8881579Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-perf-test-nightly.yml@refs/heads/main 2023-05-06T10:25:10.8882154Z NVIDIA_PRODUCT_NAME=CUDA 2023-05-06T10:25:10.8882380Z CI=true 2023-05-06T10:25:10.8882601Z PYTORCH_OVERRIDE_FLAKY_SIGNAL=1 2023-05-06T10:25:10.8882979Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-11-8=11.11.3.6-1 2023-05-06T10:25:10.8883256Z BRANCH=main 2023-05-06T10:25:10.8883464Z GITHUB_HEAD_REF= 2023-05-06T10:25:10.8883718Z UCX_COMMIT=00bcc6bb18fc282eb160623b4c0d300147f579af 2023-05-06T10:25:10.8884005Z GITHUB_ACTOR=pytorchmergebot 2023-05-06T10:25:10.8884290Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2023-05-06T10:25:10.8884539Z GITHUB_ACTION_REF= 2023-05-06T10:25:10.8884787Z NCCL_VERSION=2.15.5-1 2023-05-06T10:25:10.8885013Z GITHUB_ACTION=__self 2023-05-06T10:25:10.8885214Z VALGRIND=ON 2023-05-06T10:25:10.8885434Z GITHUB_REF_PROTECTED=true 2023-05-06T10:25:10.8885864Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2023-05-06T10:25:10.8886258Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2023-05-06T10:25:10.8886676Z *** 2023-05-06T10:25:10.8886882Z INSTALLED_VISION=yes 2023-05-06T10:25:10.8887108Z NVARCH=x86_64 2023-05-06T10:25:10.8887371Z NV_LIBCUSPARSE_DEV_VERSION=11.7.5.86-1 2023-05-06T10:25:10.8887620Z HOME=/var/lib/jenkins 2023-05-06T10:25:10.8888157Z GITHUB_STATE=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/save_state_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8888524Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2023-05-06T10:25:10.8888772Z GITHUB_ACTION_REPOSITORY= 2023-05-06T10:25:10.8889006Z GITHUB_REF_TYPE=branch 2023-05-06T10:25:10.8889269Z NV_LIBNCCL_PACKAGE_VERSION=2.15.5-1 2023-05-06T10:25:10.8889517Z GITHUB_RETENTION_DAYS=90 2023-05-06T10:25:10.8889877Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2023-05-06T10:25:10.8890250Z NV_LIBNCCL_PACKAGE=libnccl2=2.15.5-1+cuda11.8 2023-05-06T10:25:10.8890789Z GITHUB_ENV=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/set_env_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8891166Z DEBIAN_FRONTEND=noninteractive 2023-05-06T10:25:10.8891489Z 
NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2023-05-06T10:25:10.8891749Z GITHUB_REF=refs/heads/main 2023-05-06T10:25:10.8892018Z NV_CUDA_LIB_VERSION=11.8.0-1 2023-05-06T10:25:10.8892297Z GITHUB_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8892555Z INSTALLED_PROTOBUF=yes 2023-05-06T10:25:10.8892796Z ANACONDA_PYTHON_VERSION=3.10 2023-05-06T10:25:10.8893044Z GITHUB_REPOSITORY_ID=65600975 2023-05-06T10:25:10.8893265Z GITHUB_RUN_ID=4900301301 2023-05-06T10:25:10.8893592Z NV_LIBNPP_PACKAGE=libnpp-11-8=11.8.0.86-1 2023-05-06T10:25:10.8893864Z NV_LIBNCCL_PACKAGE_NAME=libnccl2 2023-05-06T10:25:10.8894115Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2023-05-06T10:25:10.8894396Z NV_NVTX_VERSION=11.8.86-1 2023-05-06T10:25:10.8894636Z CONTINUE_THROUGH_ERROR=False 2023-05-06T10:25:10.8894898Z GITHUB_SERVER_URL=https://github.com 2023-05-06T10:25:10.8895142Z MAX_JOBS=10 2023-05-06T10:25:10.8895355Z GITHUB_ACTOR_ID=97764156 2023-05-06T10:25:10.8895610Z NV_LIBCUBLAS_VERSION=11.11.3.6-1 2023-05-06T10:25:10.8896007Z NV_LIBCUBLAS_PACKAGE=libcublas-11-8=11.11.3.6-1 2023-05-06T10:25:10.8896490Z GITHUB_EVENT_PATH=/home/weiwangmeta/actions-runner/_work/_temp/_github_workflow/event.json 2023-05-06T10:25:10.8896789Z UCX_HOME=/usr 2023-05-06T10:25:10.8897193Z PYTORCH_RETRY_TEST_CASES=1 2023-05-06T10:25:10.8897492Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2023-05-06T10:25:10.8897794Z BASE_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8898113Z NV_CUDA_CUDART_DEV_VERSION=11.8.89-1 2023-05-06T10:25:10.8898350Z PR_BODY= 2023-05-06T10:25:10.8898537Z GITHUB_BASE_REF= 2023-05-06T10:25:10.8898744Z TERM=xterm 2023-05-06T10:25:10.8898939Z XLA_CUDA= 2023-05-06T10:25:10.8899166Z NV_NVML_DEV_VERSION=11.8.86-1 2023-05-06T10:25:10.8899413Z TORCH_CUDA_ARCH_LIST=Maxwell 2023-05-06T10:25:10.8899642Z CUDA_VERSION=11.8.0 2023-05-06T10:25:10.8899942Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-11-8 2023-05-06T10:25:10.8900213Z OPENSSL_ROOT_DIR=/opt/openssl 2023-05-06T10:25:10.8900857Z GITHUB_PATH=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/add_path_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8901226Z GITHUB_JOB=test 2023-05-06T10:25:10.8901564Z SCCACHE_S3_KEY_PREFIX=inductor-A100-perf-nightly 2023-05-06T10:25:10.8901849Z COMMIT_MESSAGES= 2023-05-06T10:25:10.8902108Z NVIDIA_DRIVER_CAPABILITIES=compute,utility 2023-05-06T10:25:10.8902366Z NUM_TEST_SHARDS=3 2023-05-06T10:25:10.8902570Z PR_NUMBER= 2023-05-06T10:25:10.8903086Z GITHUB_OUTPUT=/home/weiwangmeta/actions-runner/_work/_temp/_runner_file_commands/set_output_9168a97d-3ee8-4fb7-bfac-18c51214628a 2023-05-06T10:25:10.8903424Z SHLVL=1 2023-05-06T10:25:10.8903751Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-11-8 2023-05-06T10:25:10.8904041Z GITHUB_REPOSITORY=pytorch/pytorch 2023-05-06T10:25:10.8905540Z NVIDIA_REQUIRE_CUDA=cuda>=11.8 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 
brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=tesla,driver>=515,driver<516 brand=unknown,driver>=515,driver<516 brand=nvidia,driver>=515,driver<516 brand=nvidiartx,driver>=515,driver<516 brand=geforce,driver>=515,driver<516 brand=geforcertx,driver>=515,driver<516 brand=quadro,driver>=515,driver<516 brand=quadrortx,driver>=515,driver<516 brand=titan,driver>=515,driver<516 brand=titanrtx,driver>=515,driver<516 2023-05-06T10:25:10.8907167Z NV_LIBNPP_DEV_VERSION=11.8.0.86-1 2023-05-06T10:25:10.8907449Z SHA1=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8907710Z GITHUB_EVENT_NAME=schedule 2023-05-06T10:25:10.8907972Z NV_CUDA_CUDART_VERSION=11.8.89-1 2023-05-06T10:25:10.8908305Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2023-05-06T10:25:10.8908567Z GITHUB_RUN_NUMBER=622 2023-05-06T10:25:10.8908893Z GITHUB_WORKFLOW=inductor-A100-perf-nightly 2023-05-06T10:25:10.8909338Z PATH=/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-05-06T10:25:10.8909782Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.15.5-1 2023-05-06T10:25:10.8910077Z GITHUB_WORKFLOW_SHA=d719f0276d69a8315b65f4c4500cfc1cdaddb025 2023-05-06T10:25:10.8910525Z GITHUB_WORKSPACE=/home/weiwangmeta/actions-runner/_work/pytorch/pytorch 2023-05-06T10:25:10.8910857Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2023-05-06T10:25:10.8911134Z SKIP_SCCACHE_INITIALIZATION=1 2023-05-06T10:25:10.8911353Z _=/usr/bin/env 2023-05-06T10:25:10.8911617Z + echo 'Testing pytorch' 2023-05-06T10:25:10.8911845Z Testing pytorch 2023-05-06T10:25:10.8912071Z + export LANG=C.UTF-8 2023-05-06T10:25:10.8912303Z + LANG=C.UTF-8 2023-05-06T10:25:10.8912504Z + PR_NUMBER= 2023-05-06T10:25:10.8912880Z + [[ inductor_torchbench_perf == \d\e\f\a\u\l\t ]] 2023-05-06T10:25:10.8913194Z + [[ inductor_torchbench_perf == \d\i\s\t\r\i\b\u\t\e\d ]] 2023-05-06T10:25:10.8913490Z + [[ inductor_torchbench_perf == \s\l\o\w ]] 2023-05-06T10:25:10.8913913Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *slow-gradcheck* ]] 2023-05-06T10:25:10.8914369Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *cuda* ]] 2023-05-06T10:25:10.8914692Z + export PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2023-05-06T10:25:10.8914960Z + PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2023-05-06T10:25:10.8915242Z + [[ inductor_torchbench_perf == *crossref* ]] 2023-05-06T10:25:10.8915639Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *rocm* ]] 2023-05-06T10:25:10.8916125Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 != *-bazel-* ]] 2023-05-06T10:25:10.8916574Z + pip_install --user ninja==1.10.2 2023-05-06T10:25:10.8917348Z + pip install --progress-bar off --user ninja==1.10.2 2023-05-06T10:25:11.4361876Z Collecting ninja==1.10.2 2023-05-06T10:25:11.4993428Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2023-05-06T10:25:12.3240438Z Installing collected packages: ninja 2023-05-06T10:25:12.3333642Z  WARNING: The script ninja is installed in '/var/lib/jenkins/.local/bin' which is not on PATH. 2023-05-06T10:25:12.3334311Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 
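The "+" lines above are bash xtrace output from the CI test script: the pip_install helper it calls expands to a plain pip invocation with the progress bar disabled, as the trace shows for ninja==1.10.2. A minimal sketch of such a wrapper is below; it assumes the helper does nothing beyond forwarding its arguments (the real definition lives in the PyTorch CI shell scripts and may add retries or extra flags):

    # Hedged reconstruction of the pip_install helper seen in the trace above;
    # the actual helper in the CI scripts may differ (e.g. add retry logic).
    pip_install() {
      # Disable pip's progress bar so CI logs stay readable, then forward all args.
      pip install --progress-bar off "$@"
    }

    # Usage mirroring the traced call:
    pip_install --user ninja==1.10.2

Because the --user install places the ninja entry point under /var/lib/jenkins/.local/bin (hence pip's PATH warning above), the script prepends that directory to PATH immediately afterwards, as the following lines show.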
2023-05-06T10:25:12.3401202Z Successfully installed ninja-1.10.2 2023-05-06T10:25:12.4283378Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-05-06T10:25:12.4284385Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2023-05-06T10:25:12.4285189Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *asan* ]] 2023-05-06T10:25:12.4285743Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *-debug* ]] 2023-05-06T10:25:12.4286211Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 != *-bazel-* ]] 2023-05-06T10:25:12.4286761Z + echo 'We are not in debug mode: linux-bionic-cuda11.8-py3.10-gcc7-sm80. Expect the assertion to pass' 2023-05-06T10:25:12.4287359Z We are not in debug mode: linux-bionic-cuda11.8-py3.10-gcc7-sm80. Expect the assertion to pass 2023-05-06T10:25:12.4290063Z + cd test 2023-05-06T10:25:12.4290522Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2023-05-06T10:25:14.3161806Z + [[ inductor_torchbench_perf == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2023-05-06T10:25:14.3162421Z + [[ inductor_torchbench_perf == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2023-05-06T10:25:14.3162919Z + DYNAMO_BENCHMARK_FLAGS=() 2023-05-06T10:25:14.3163396Z + [[ inductor_torchbench_perf == *dynamo_eager* ]] 2023-05-06T10:25:14.3163933Z + [[ inductor_torchbench_perf == *aot_eager* ]] 2023-05-06T10:25:14.3164234Z + [[ inductor_torchbench_perf == *inductor* ]] 2023-05-06T10:25:14.3164516Z + [[ inductor_torchbench_perf != *perf* ]] 2023-05-06T10:25:14.3164798Z + [[ inductor_torchbench_perf == *dynamic* ]] 2023-05-06T10:25:14.3165079Z + [[ inductor_torchbench_perf == *cpu_accuracy* ]] 2023-05-06T10:25:14.3165727Z + DYNAMO_BENCHMARK_FLAGS+=(--device cuda) 2023-05-06T10:25:14.3166018Z + [[ inductor_torchbench_perf == *max_autotune* ]] 2023-05-06T10:25:14.3176020Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *tbb* ]] 2023-05-06T10:25:14.3192084Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *libtorch* ]] 2023-05-06T10:25:14.3192581Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *-bazel-* ]] 2023-05-06T10:25:14.3196196Z + cd test 2023-05-06T10:25:14.3197004Z + python -c 'import torch; print(torch.__config__.show())' 2023-05-06T10:25:15.9053336Z PyTorch built with: 2023-05-06T10:25:15.9054061Z - GCC 7.5 2023-05-06T10:25:15.9054330Z - C++ Version: 201703 2023-05-06T10:25:15.9054856Z - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2023-05-06T10:25:15.9055987Z - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e) 2023-05-06T10:25:15.9056402Z - OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2023-05-06T10:25:15.9056744Z - LAPACK is enabled (usually provided by MKL) 2023-05-06T10:25:15.9057047Z - NNPACK is enabled 2023-05-06T10:25:15.9057336Z - CPU capability usage: AVX2 2023-05-06T10:25:15.9057603Z - CUDA Runtime 11.8 2023-05-06T10:25:15.9057972Z - NVCC architecture flags: -gencode;arch=compute_80,code=sm_80 2023-05-06T10:25:15.9058284Z - CuDNN 8.7 2023-05-06T10:25:15.9058512Z - Magma 2.6.1 2023-05-06T10:25:15.9062005Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.8, CUDNN_VERSION=8.7.0, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-unused-private-field -Wno-aligned-allocation-unavailable -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Werror -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.1.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=ON, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 2023-05-06T10:25:15.9064726Z 2023-05-06T10:25:16.2861109Z + cd test 2023-05-06T10:25:16.2861817Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2023-05-06T10:25:17.7795680Z ATen/Parallel: 2023-05-06T10:25:17.7798050Z at::get_num_threads() : 6 2023-05-06T10:25:17.7798328Z at::get_num_interop_threads() : 6 2023-05-06T10:25:17.7798592Z OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2023-05-06T10:25:17.7798825Z omp_get_max_threads() : 6 2023-05-06T10:25:17.7799566Z Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2023-05-06T10:25:17.7799921Z mkl_get_max_threads() : 6 2023-05-06T10:25:17.7800364Z Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e) 2023-05-06T10:25:17.7800743Z std::thread::hardware_concurrency() : 12 2023-05-06T10:25:17.7801022Z Environment variables: 2023-05-06T10:25:17.7801261Z OMP_NUM_THREADS : [not set] 2023-05-06T10:25:17.7801484Z MKL_NUM_THREADS : [not set] 2023-05-06T10:25:17.7801752Z ATen parallel backend: OpenMP 2023-05-06T10:25:17.7801905Z 2023-05-06T10:25:18.1135675Z + [[ inductor_torchbench_perf == *backward* ]] 2023-05-06T10:25:18.1136213Z + [[ inductor_torchbench_perf == *xla* ]] 2023-05-06T10:25:18.1136735Z + [[ inductor_torchbench_perf == \j\i\t\_\l\e\g\a\c\y ]] 2023-05-06T10:25:18.1137627Z + [[ linux-bionic-cuda11.8-py3.10-gcc7-sm80 == *libtorch* ]] 2023-05-06T10:25:18.1137952Z + [[ inductor_torchbench_perf == distributed ]] 2023-05-06T10:25:18.1138244Z + [[ inductor_torchbench_perf == deploy ]] 2023-05-06T10:25:18.1138550Z + [[ inductor_torchbench_perf == *inductor_distributed* ]] 2023-05-06T10:25:18.1138845Z + [[ inductor_torchbench_perf == *huggingface* ]] 2023-05-06T10:25:18.1139132Z + [[ inductor_torchbench_perf == *timm* ]] 2023-05-06T10:25:18.1139619Z + [[ inductor_torchbench_perf == *torchbench* ]] 2023-05-06T10:25:18.1139906Z + [[ inductor_torchbench_perf == *cpu_accuracy* ]] 2023-05-06T10:25:18.1140197Z + install_torchaudio cuda 2023-05-06T10:25:18.1141074Z + local commit 2023-05-06T10:25:18.1141743Z ++ get_pinned_commit audio 2023-05-06T10:25:18.1141993Z ++ cat .github/ci_commit_pins/audio.txt 2023-05-06T10:25:18.1160902Z + commit=a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:18.1161180Z + [[ cuda == \c\u\d\a ]] 2023-05-06T10:25:18.1161598Z + TORCH_CUDA_ARCH_LIST='8.0;8.6' 2023-05-06T10:25:18.1162188Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/audio.git@a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:18.1162917Z + pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/audio.git@a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:18.5514695Z Collecting git+https://github.com/pytorch/audio.git@a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:18.5520991Z Cloning https://github.com/pytorch/audio.git (to revision a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602) to /tmp/pip-req-build-21c_ypfs 2023-05-06T10:25:18.5548568Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/audio.git /tmp/pip-req-build-21c_ypfs 2023-05-06T10:25:20.4934213Z Running command git rev-parse -q --verify 'sha^a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602' 2023-05-06T10:25:20.4958930Z Running command git fetch -q https://github.com/pytorch/audio.git a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:21.3523657Z Running command git checkout -q a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:21.9359290Z Resolved https://github.com/pytorch/audio.git to commit a8f4e97bd5356a7a77510cdf6a3a62e25a5dc602 2023-05-06T10:25:21.9360645Z Running command git submodule update --init --recursive -q 2023-05-06T10:25:42.7587230Z Preparing metadata (setup.py) ... 
[?25l- done 2023-05-06T10:25:42.7622520Z [?25hRequirement already satisfied: torch in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchaudio==2.0.0a0+a8f4e97) (2.1.0a0+gitd719f02) 2023-05-06T10:25:42.7710292Z Requirement already satisfied: typing-extensions in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (4.5.0) 2023-05-06T10:25:42.7714167Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (3.1.2) 2023-05-06T10:25:42.7718891Z Requirement already satisfied: sympy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (1.11.1) 2023-05-06T10:25:42.7724066Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (3.9.0) 2023-05-06T10:25:42.7729154Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (2023.4.0) 2023-05-06T10:25:42.7733127Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchaudio==2.0.0a0+a8f4e97) (2.8.8) 2023-05-06T10:25:42.8195886Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch->torchaudio==2.0.0a0+a8f4e97) (2.1.2) 2023-05-06T10:25:42.8373390Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy->torch->torchaudio==2.0.0a0+a8f4e97) (1.3.0) 2023-05-06T10:25:42.8465705Z Building wheels for collected packages: torchaudio 2023-05-06T10:29:19.4999869Z Building wheel for torchaudio (setup.py) ... 
done
2023-05-06T10:29:19.5130134Z Created wheel for torchaudio: filename=torchaudio-2.0.0a0+a8f4e97-cp310-cp310-linux_x86_64.whl size=3812801 sha256=cd4613b780a83aadfa829e4eec3927491bb3885f6458d13415f63c4387204097
2023-05-06T10:29:19.5131318Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/34/3a/3f/9b303b7cc5d4e7d824fe266e4f875e08f15eb13882f11f305c
2023-05-06T10:29:19.5164803Z Successfully built torchaudio
2023-05-06T10:29:20.2289144Z Installing collected packages: torchaudio
2023-05-06T10:29:20.4903723Z Successfully installed torchaudio-2.0.0a0+a8f4e97
2023-05-06T10:29:21.0581566Z + install_torchtext
2023-05-06T10:29:21.0581993Z + local commit
2023-05-06T10:29:21.0588095Z ++ get_pinned_commit text
2023-05-06T10:29:21.0588437Z ++ cat .github/ci_commit_pins/text.txt
2023-05-06T10:29:21.0612243Z + commit=5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:21.0613670Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/text.git@5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:21.0614425Z + pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/text.git@5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:21.4977898Z Collecting git+https://github.com/pytorch/text.git@5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:21.4985392Z Cloning https://github.com/pytorch/text.git (to revision 5b78d074bd303eb230d30567646fcf0358ee2dd4) to /tmp/pip-req-build-mnwgd4d0
2023-05-06T10:29:21.5013384Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/text.git /tmp/pip-req-build-mnwgd4d0
2023-05-06T10:29:23.1990869Z Running command git rev-parse -q --verify 'sha^5b78d074bd303eb230d30567646fcf0358ee2dd4'
2023-05-06T10:29:23.2015481Z Running command git fetch -q https://github.com/pytorch/text.git 5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:23.9991887Z Running command git checkout -q 5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:24.4654658Z Resolved https://github.com/pytorch/text.git to commit 5b78d074bd303eb230d30567646fcf0358ee2dd4
2023-05-06T10:29:24.4656197Z Running command git submodule update --init --recursive -q
2023-05-06T10:29:31.7066839Z Preparing metadata (setup.py) ...
[?25l- done 2023-05-06T10:29:31.7130262Z [?25hRequirement already satisfied: tqdm in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchtext==0.14.0a0+5b78d07) (4.65.0) 2023-05-06T10:29:31.7133519Z Requirement already satisfied: requests in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchtext==0.14.0a0+5b78d07) (2.30.0) 2023-05-06T10:29:31.7137060Z Requirement already satisfied: torch in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchtext==0.14.0a0+5b78d07) (2.1.0a0+gitd719f02) 2023-05-06T10:29:31.7140981Z Requirement already satisfied: numpy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchtext==0.14.0a0+5b78d07) (1.21.2) 2023-05-06T10:29:31.7212935Z Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchtext==0.14.0a0+5b78d07) (3.1.0) 2023-05-06T10:29:31.7219242Z Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchtext==0.14.0a0+5b78d07) (1.26.15) 2023-05-06T10:29:31.7227840Z Requirement already satisfied: idna<4,>=2.5 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchtext==0.14.0a0+5b78d07) (3.4) 2023-05-06T10:29:31.7233983Z Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchtext==0.14.0a0+5b78d07) (2022.12.7) 2023-05-06T10:29:31.7295937Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (2023.4.0) 2023-05-06T10:29:31.7299647Z Requirement already satisfied: typing-extensions in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (4.5.0) 2023-05-06T10:29:31.7302811Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (3.1.2) 2023-05-06T10:29:31.7306923Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (3.9.0) 2023-05-06T10:29:31.7310375Z Requirement already satisfied: sympy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (1.11.1) 2023-05-06T10:29:31.7315743Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchtext==0.14.0a0+5b78d07) (2.8.8) 2023-05-06T10:29:31.8026266Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch->torchtext==0.14.0a0+5b78d07) (2.1.2) 2023-05-06T10:29:31.8218859Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy->torch->torchtext==0.14.0a0+5b78d07) (1.3.0) 2023-05-06T10:29:31.8317535Z Building wheels for collected packages: torchtext 2023-05-06T10:30:21.7299011Z Building wheel for torchtext (setup.py) ... 
done
2023-05-06T10:30:21.7370358Z Created wheel for torchtext: filename=torchtext-0.14.0a0+5b78d07-cp310-cp310-linux_x86_64.whl size=2049779 sha256=721743b2c0012fc227bd220ece3402f325675e769341b10f5ca9083be9b3d81b
2023-05-06T10:30:21.7371198Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/32/8c/8c/c80fd49c228ca47afb83b30e482dfd8079c17702efc9669f33
2023-05-06T10:30:21.7403483Z Successfully built torchtext
2023-05-06T10:30:22.4388846Z Installing collected packages: torchtext
2023-05-06T10:30:22.5695033Z Successfully installed torchtext-0.14.0a0+5b78d07
2023-05-06T10:30:22.7099665Z + install_torchvision
2023-05-06T10:30:22.7099953Z + local commit
2023-05-06T10:30:22.7107565Z ++ get_pinned_commit vision
2023-05-06T10:30:22.7108985Z ++ cat .github/ci_commit_pins/vision.txt
2023-05-06T10:30:22.7129351Z + commit=0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:22.7130149Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/vision.git@0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:22.7130928Z + pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/vision.git@0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:23.1542889Z Collecting git+https://github.com/pytorch/vision.git@0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:23.1550601Z Cloning https://github.com/pytorch/vision.git (to revision 0370134359268666bb1de9b53c1e9e64f1d4cde7) to /tmp/pip-req-build-_4cl6st0
2023-05-06T10:30:23.1576898Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/vision.git /tmp/pip-req-build-_4cl6st0
2023-05-06T10:30:25.8570941Z Running command git rev-parse -q --verify 'sha^0370134359268666bb1de9b53c1e9e64f1d4cde7'
2023-05-06T10:30:25.8595145Z Running command git fetch -q https://github.com/pytorch/vision.git 0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:27.3373798Z Resolved https://github.com/pytorch/vision.git to commit 0370134359268666bb1de9b53c1e9e64f1d4cde7
2023-05-06T10:30:29.9824907Z Preparing metadata (setup.py) ...
[?25l- done 2023-05-06T10:30:29.9898734Z [?25hRequirement already satisfied: numpy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.16.0a0+370134) (1.21.2) 2023-05-06T10:30:29.9903641Z Requirement already satisfied: requests in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.16.0a0+370134) (2.30.0) 2023-05-06T10:30:29.9909643Z Requirement already satisfied: torch in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.16.0a0+370134) (2.1.0a0+gitd719f02) 2023-05-06T10:30:29.9918597Z Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.16.0a0+370134) (9.5.0) 2023-05-06T10:30:30.0112295Z Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchvision==0.16.0a0+370134) (1.26.15) 2023-05-06T10:30:30.0121220Z Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchvision==0.16.0a0+370134) (3.1.0) 2023-05-06T10:30:30.0131279Z Requirement already satisfied: idna<4,>=2.5 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchvision==0.16.0a0+370134) (3.4) 2023-05-06T10:30:30.0140299Z Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from requests->torchvision==0.16.0a0+370134) (2022.12.7) 2023-05-06T10:30:30.0207658Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (3.9.0) 2023-05-06T10:30:30.0212400Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (2.8.8) 2023-05-06T10:30:30.0218537Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (2023.4.0) 2023-05-06T10:30:30.0223664Z Requirement already satisfied: sympy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (1.11.1) 2023-05-06T10:30:30.0229341Z Requirement already satisfied: typing-extensions in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (4.5.0) 2023-05-06T10:30:30.0234939Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.16.0a0+370134) (3.1.2) 2023-05-06T10:30:30.0877728Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch->torchvision==0.16.0a0+370134) (2.1.2) 2023-05-06T10:30:30.1071300Z Requirement already satisfied: mpmath>=0.19 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy->torch->torchvision==0.16.0a0+370134) (1.3.0) 2023-05-06T10:30:30.1175124Z Building wheels for collected packages: torchvision 2023-05-06T10:32:06.6618541Z Building wheel for torchvision (setup.py) ... 
done
2023-05-06T10:32:06.6687137Z Created wheel for torchvision: filename=torchvision-0.16.0a0+370134-cp310-cp310-linux_x86_64.whl size=1919927 sha256=c866768b3c0e6432ec6f6f3d16c754636c501b2b0140ada6aada2fe49632467e
2023-05-06T10:32:06.6736750Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/39/f4/0a/dedf074a191b1e9abac8129f302a3ffb302204ec20d40208f1
2023-05-06T10:32:06.6737161Z Successfully built torchvision
2023-05-06T10:32:07.3696414Z Installing collected packages: torchvision
2023-05-06T10:32:07.8762378Z Successfully installed torchvision-0.16.0a0+370134
2023-05-06T10:32:08.0063639Z + id=1
2023-05-06T10:32:08.0064315Z + [[ inductor_torchbench_perf == *inductor_torchbench_smoketest_perf* ]]
2023-05-06T10:32:08.0064854Z + checkout_install_torchbench
2023-05-06T10:32:08.0065088Z + local commit
2023-05-06T10:32:08.0074873Z ++ get_pinned_commit torchbench
2023-05-06T10:32:08.0075222Z ++ cat .github/ci_commit_pins/torchbench.txt
2023-05-06T10:32:08.0095317Z + commit=a0848e19bad26ed92810b56616e93dbec0eeaa24
2023-05-06T10:32:08.0095747Z + git clone https://github.com/pytorch/benchmark torchbench
2023-05-06T10:32:08.0114601Z Cloning into 'torchbench'...
2023-05-06T10:32:08.4253193Z remote: Enumerating objects: 22078, done.
2023-05-06T10:32:08.4316046Z remote: Counting objects: 100% (894/894), done.
2023-05-06T10:32:08.4607853Z remote: Compressing objects: 100% (458/458), done.
2023-05-06T10:32:12.3046626Z remote: Total 22078 (delta 511), reused 769 (delta 434), pack-reused 21184
2023-05-06T10:32:12.3069564Z Receiving objects: 100% (22078/22078), 242.57 MiB | 63.11 MiB/s, done.
2023-05-06T10:32:12.4017673Z Resolving deltas: 100% (10818/10818), done.
2023-05-06T10:32:13.0343412Z + pushd torchbench
2023-05-06T10:32:13.0344077Z ~/workspace/torchbench ~/workspace
2023-05-06T10:32:13.0344405Z + git checkout a0848e19bad26ed92810b56616e93dbec0eeaa24
2023-05-06T10:32:13.0629361Z Note: checking out 'a0848e19bad26ed92810b56616e93dbec0eeaa24'.
2023-05-06T10:32:13.0629823Z 
2023-05-06T10:32:13.0630209Z You are in 'detached HEAD' state. You can look around, make experimental
2023-05-06T10:32:13.0630619Z changes and commit them, and you can discard any commits you make in this
2023-05-06T10:32:13.0631016Z state without impacting any branches by performing another checkout.
2023-05-06T10:32:13.0631223Z 
2023-05-06T10:32:13.0631390Z If you want to create a new branch to retain commits you create, you may
2023-05-06T10:32:13.0631838Z do so (now or later) by using -b with the checkout command again. Example:
2023-05-06T10:32:13.0632045Z 
2023-05-06T10:32:13.0632222Z   git checkout -b <new-branch-name>
2023-05-06T10:32:13.0632388Z 
2023-05-06T10:32:13.0632535Z HEAD is now at a0848e19 Pin rapidfuzz dep to 2.15.1 (#1555)
2023-05-06T10:32:13.0635924Z + '[' '' ']'
2023-05-06T10:32:13.0636319Z + python install.py --continue_on_fail
2023-05-06T10:32:14.9137010Z checking packages torch, torchvision, torchtext, torchaudio are installed...OK
2023-05-06T10:32:15.4121624Z checking out input files from Amazon S3 ...Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/Background_Matting_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/Background_Matting_inputs.tar.gz
2023-05-06T10:32:15.9974912Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/coco128.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/coco128.tar.gz
2023-05-06T10:32:16.3462584Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/multi30k.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/multi30k.tar.gz
2023-05-06T10:32:17.1781728Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/tacotron2-minimal.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/tacotron2-minimal.tar.gz
2023-05-06T10:32:17.7158452Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/coco2017-minimal.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/coco2017-minimal.tar.gz
2023-05-06T10:32:17.9412296Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/pytorch_stargan_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/pytorch_stargan_inputs.tar.gz
2023-05-06T10:32:18.4073172Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/LearningToPaint_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/LearningToPaint_inputs.tar.gz
2023-05-06T10:32:18.7886468Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/pytorch_CycleGAN_and_pix2pix_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/pytorch_CycleGAN_and_pix2pix_inputs.tar.gz
2023-05-06T10:32:19.5744582Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/Super_SloMo_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/Super_SloMo_inputs.tar.gz
2023-05-06T10:32:19.9105597Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/speech_transformer_inputs.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/speech_transformer_inputs.tar.gz
2023-05-06T10:32:20.5832615Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/data/Reddit_minimal.tar.gz to /var/lib/jenkins/workspace/torchbench/torchbenchmark/data/Reddit_minimal.tar.gz
2023-05-06T10:32:20.8613898Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/models/drq/obs.pkl to /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/drq/obs.pkl
2023-05-06T10:32:21.2775046Z Checking out https://ossci-datasets.s3.amazonaws.com/torchbench/models/maml_omniglot/batch.pt to /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml_omniglot/batch.pt
2023-05-06T10:32:21.2841595Z OK
2023-05-06T10:32:21.3457013Z decompressing input tarball: 
pytorch_CycleGAN_and_pix2pix_inputs.tar.gz...OK 2023-05-06T10:32:21.6051595Z decompressing input tarball: Super_SloMo_inputs.tar.gz...OK 2023-05-06T10:32:21.6167538Z decompressing input tarball: speech_transformer_inputs.tar.gz...OK 2023-05-06T10:32:21.7202313Z decompressing input tarball: Background_Matting_inputs.tar.gz...OK 2023-05-06T10:32:21.7986582Z decompressing input tarball: LearningToPaint_inputs.tar.gz...OK 2023-05-06T10:32:21.8775918Z decompressing input tarball: coco2017-minimal.tar.gz...OK 2023-05-06T10:32:22.0292604Z decompressing input tarball: coco128.tar.gz...OK 2023-05-06T10:32:22.4868131Z decompressing input tarball: tacotron2-minimal.tar.gz...OK 2023-05-06T10:32:22.4903675Z decompressing input tarball: pytorch_stargan_inputs.tar.gz...OK 2023-05-06T10:32:22.5211543Z decompressing input tarball: multi30k.tar.gz...OK 2023-05-06T10:32:22.7541211Z decompressing input tarball: Reddit_minimal.tar.gz...OK 2023-05-06T10:32:48.8626207Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/BERT_pytorch...OK 2023-05-06T10:32:57.9951106Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/Background_Matting...OK 2023-05-06T10:33:15.2191613Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/DALLE2_pytorch...OK 2023-05-06T10:33:17.2834103Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/LearningToPaint...OK 2023-05-06T10:33:19.3531373Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/Super_SloMo...OK 2023-05-06T10:33:19.3749541Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/alexnet...OK 2023-05-06T10:33:52.1574595Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/attention_is_all_you_need_pytorch...OK 2023-05-06T10:33:54.3712206Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/dcgan...OK 2023-05-06T10:33:58.7185001Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/demucs...OK 2023-05-06T10:33:58.7404267Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/densenet121...OK 2023-05-06T10:35:18.6693022Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_101_c4...OK 2023-05-06T10:35:30.8455733Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_101_dc5...OK 2023-05-06T10:35:41.1589549Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_101_fpn...OK 2023-05-06T10:35:51.1578491Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_50_c4...OK 2023-05-06T10:36:02.7537020Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_50_dc5...OK 2023-05-06T10:36:12.7173480Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fasterrcnn_r_50_fpn...OK 2023-05-06T10:36:21.6715605Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_fcos_r_50_fpn...OK 2023-05-06T10:36:31.7370133Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_maskrcnn...OK 2023-05-06T10:36:41.9047434Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_maskrcnn_r_101_c4...OK 2023-05-06T10:36:52.0938656Z running setup for 
/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_maskrcnn_r_101_fpn...OK 2023-05-06T10:37:03.0802046Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_maskrcnn_r_50_c4...OK 2023-05-06T10:37:12.9796495Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/detectron2_maskrcnn_r_50_fpn...OK 2023-05-06T10:37:27.1119405Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/dlrm...OK 2023-05-06T10:38:18.5999796Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/doctr_det_predictor...OK 2023-05-06T10:38:44.2804711Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/doctr_reco_predictor...OK 2023-05-06T10:38:46.7974813Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/drq...OK 2023-05-06T10:39:17.3257500Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/fambench_xlmr...OK 2023-05-06T10:39:23.8078511Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/fastNLP_Bert...OK 2023-05-06T10:39:26.3352645Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/functorch_dp_cifar10...OK 2023-05-06T10:39:28.8372627Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/functorch_maml_omniglot...OK 2023-05-06T10:39:32.8965963Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/gat...OK 2023-05-06T10:39:35.5026932Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/gcn...OK 2023-05-06T10:39:44.5171714Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Albert...OK 2023-05-06T10:39:53.9728704Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Bart...OK 2023-05-06T10:40:01.4350026Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Bert...OK 2023-05-06T10:40:11.6195387Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Bert_large...OK 2023-05-06T10:40:19.5726504Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_BigBird...OK 2023-05-06T10:40:26.4749801Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_DistilBert...OK 2023-05-06T10:40:34.7515763Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_GPT2...OK 2023-05-06T10:40:55.0234575Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_GPT2_large...OK 2023-05-06T10:41:03.5330280Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Longformer...OK 2023-05-06T10:41:09.1452259Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_Reformer...OK 2023-05-06T10:41:15.8308623Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_T5...OK 2023-05-06T10:41:24.9845324Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_T5_base...OK 2023-05-06T10:41:41.8916750Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/hf_T5_large...OK 2023-05-06T10:41:44.5336590Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/lennard_jones...OK 2023-05-06T10:41:47.2196757Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/llama...OK 2023-05-06T10:41:47.2429866Z running setup for 
/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml...OK 2023-05-06T10:41:50.0520258Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml_omniglot...OK 2023-05-06T10:41:50.0751319Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mnasnet1_0...OK 2023-05-06T10:41:50.0981228Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2...OK 2023-05-06T10:41:50.1209472Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat...OK 2023-05-06T10:41:50.1439413Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v3_large...OK 2023-05-06T10:41:50.1669388Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco...OK 2023-05-06T10:41:52.8186693Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/nvidia_deeprecommender...OK 2023-05-06T10:41:57.6458236Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/opacus_cifar10...OK 2023-05-06T10:41:57.6692078Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/phlippe_densenet...OK 2023-05-06T10:41:57.6917637Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/phlippe_resnet...OK 2023-05-06T10:41:57.7147497Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pyhpc_equation_of_state...OK 2023-05-06T10:41:57.7378464Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pyhpc_isoneutral_mixing...OK 2023-05-06T10:41:57.7602317Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pyhpc_turbulent_kinetic_energy...OK 2023-05-06T10:42:06.3976045Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pytorch_CycleGAN_and_pix2pix...OK 2023-05-06T10:42:09.0208218Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pytorch_stargan...OK 2023-05-06T10:42:20.1429882Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pytorch_struct...OK 2023-05-06T10:42:22.9786097Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/pytorch_unet...OK 2023-05-06T10:42:23.0165231Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/resnet152...OK 2023-05-06T10:42:23.0402008Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/resnet18...OK 2023-05-06T10:42:23.0639023Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/resnet50...OK 2023-05-06T10:42:23.0872000Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/resnet50_quantized_qat...OK 2023-05-06T10:42:23.1105137Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/resnext50_32x4d...OK 2023-05-06T10:42:25.9383638Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/sage...OK 2023-05-06T10:42:25.9623277Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/shufflenet_v2_x1_0...OK 2023-05-06T10:42:35.2273477Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/soft_actor_critic...OK 2023-05-06T10:42:38.2020808Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/speech_transformer...OK 2023-05-06T10:42:38.2253691Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/squeezenet1_1...OK 
2023-05-06T10:42:41.7136732Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/tacotron2...OK 2023-05-06T10:42:59.6072868Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_efficientdet...OK 2023-05-06T10:42:59.6313085Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_efficientnet...OK 2023-05-06T10:42:59.6549058Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_nfnet...OK 2023-05-06T10:42:59.6790134Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_regnet...OK 2023-05-06T10:42:59.7025594Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_resnest...OK 2023-05-06T10:42:59.7264061Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_vision_transformer...OK 2023-05-06T10:42:59.7501714Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_vision_transformer_large...OK 2023-05-06T10:42:59.7736769Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/timm_vovnet...OK 2023-05-06T10:43:02.8640641Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/torchrec_dlrm...OK 2023-05-06T10:43:24.1020472Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/tts_angular...OK 2023-05-06T10:43:24.1257132Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/vgg16...OK 2023-05-06T10:43:27.0479844Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/vision_maskrcnn...OK 2023-05-06T10:43:30.1015633Z running setup for /var/lib/jenkins/workspace/torchbench/torchbenchmark/models/yolov3...OK 2023-05-06T10:43:30.4880773Z + popd 2023-05-06T10:43:30.4881606Z ~/workspace 2023-05-06T10:43:30.4891737Z ++ pwd 2023-05-06T10:43:30.4896254Z + PYTHONPATH=/var/lib/jenkins/workspace/torchbench 2023-05-06T10:43:30.4896596Z + test_dynamo_benchmark torchbench 1 2023-05-06T10:43:30.4903222Z ++ pwd 2023-05-06T10:43:30.4908494Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2023-05-06T10:43:30.4909006Z + local suite=torchbench 2023-05-06T10:43:30.4909393Z + shift 2023-05-06T10:43:30.4911462Z + local shard_id=1 2023-05-06T10:43:30.4911818Z + shift 2023-05-06T10:43:30.4912291Z + [[ inductor_torchbench_perf == *perf_compare* ]] 2023-05-06T10:43:30.4912767Z + [[ inductor_torchbench_perf == *perf* ]] 2023-05-06T10:43:30.4913291Z + test_single_dynamo_benchmark dashboard torchbench 1 2023-05-06T10:43:30.4914211Z ++ pwd 2023-05-06T10:43:30.4918024Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2023-05-06T10:43:30.4918782Z + mkdir -p /var/lib/jenkins/workspace/test/test-reports 2023-05-06T10:43:30.4940347Z + local name=dashboard 2023-05-06T10:43:30.4940730Z + shift 2023-05-06T10:43:30.4941091Z + local suite=torchbench 2023-05-06T10:43:30.4941473Z + shift 2023-05-06T10:43:30.4943443Z + local shard_id=1 2023-05-06T10:43:30.4943808Z + shift 2023-05-06T10:43:30.4944180Z + partition_flags=() 2023-05-06T10:43:30.4944425Z + local partition_flags 2023-05-06T10:43:30.4945130Z + [[ -n 3 ]] 2023-05-06T10:43:30.4947058Z + [[ -n 1 ]] 2023-05-06T10:43:30.4947742Z + partition_flags=(--total-partitions "$NUM_TEST_SHARDS" --partition-id "$shard_id") 2023-05-06T10:43:30.4948428Z + [[ inductor_torchbench_perf == *perf_compare* ]] 2023-05-06T10:43:30.4948744Z + [[ inductor_torchbench_perf == *perf* ]] 2023-05-06T10:43:30.4949222Z + 
test_perf_for_dashboard torchbench --device cuda --total-partitions 3 --partition-id 1 2023-05-06T10:43:30.4949546Z ++ pwd 2023-05-06T10:43:30.4951062Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2023-05-06T10:43:30.4951472Z + mkdir -p /var/lib/jenkins/workspace/test/test-reports 2023-05-06T10:43:30.4967249Z + local suite=torchbench 2023-05-06T10:43:30.4967477Z + shift 2023-05-06T10:43:30.4967679Z + dtype=amp 2023-05-06T10:43:30.4967874Z + backend=inductor 2023-05-06T10:43:30.4968446Z + for mode in inference training 2023-05-06T10:43:30.4968746Z + [[ inductor_torchbench_perf == *max_autotune* ]] 2023-05-06T10:43:30.4969726Z + python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_no_cudagraphs_torchbench_amp_inference_cuda_accuracy.csv 2023-05-06T10:43:41.7817177Z cuda eval functorch_maml_omniglot pass 2023-05-06T10:44:01.5631783Z cuda eval hf_Albert pass 2023-05-06T10:44:33.4679602Z cuda eval hf_Bart pass 2023-05-06T10:44:56.3942074Z cuda eval hf_Bert pass 2023-05-06T10:45:33.3903297Z cuda eval hf_Bert_large pass 2023-05-06T10:45:42.4429879Z cuda eval hf_BigBird WARNING:common:fp64 golden ref were not generated for hf_BigBird. Setting accuracy check to cosine 2023-05-06T10:46:54.6916909Z pass 2023-05-06T10:47:12.0714982Z cuda eval hf_DistilBert pass 2023-05-06T10:47:35.9721763Z cuda eval hf_GPT2 pass 2023-05-06T10:47:57.2140813Z cuda eval hf_GPT2_large pass_due_to_skip 2023-05-06T10:48:05.5098250Z cuda eval hf_Longformer WARNING:common:fp64 golden ref were not generated for hf_Longformer. Setting accuracy check to cosine 2023-05-06T10:49:01.2119069Z pass 2023-05-06T10:49:29.1880950Z cuda eval hf_Reformer pass 2023-05-06T10:49:36.2110352Z cuda eval hf_T5 WARNING:common:fp64 golden ref were not generated for hf_T5. 
Setting accuracy check to cosine 2023-05-06T10:50:00.4806737Z pass 2023-05-06T10:50:10.2840210Z Eager model failed to run 2023-05-06T10:50:10.2851261Z Traceback (most recent call last): 2023-05-06T10:50:10.2851683Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T10:50:10.2857048Z self.model_iter_fn(model, example_inputs) 2023-05-06T10:50:10.2857614Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T10:50:10.2857973Z return mod(*inputs) 2023-05-06T10:50:10.2859559Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2859926Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2860441Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2860798Z return forward_call(*args, **kwargs) 2023-05-06T10:50:10.2861222Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T10:50:10.2861653Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T10:50:10.2862237Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2862612Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2864992Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2865383Z return forward_call(*args, **kwargs) 2023-05-06T10:50:10.2865922Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1704, in forward 2023-05-06T10:50:10.2866291Z decoder_outputs = self.decoder( 2023-05-06T10:50:10.2866812Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2867171Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2867675Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2868356Z return forward_call(*args, **kwargs) 2023-05-06T10:50:10.2868872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1074, in forward 2023-05-06T10:50:10.2869244Z layer_outputs = layer_module( 2023-05-06T10:50:10.2869760Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2870131Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2870619Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2870972Z return forward_call(*args, **kwargs) 2023-05-06T10:50:10.2871528Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward 2023-05-06T10:50:10.2871912Z self_attention_outputs = self.layer[0]( 2023-05-06T10:50:10.2872448Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2872815Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2873315Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2873659Z return 
forward_call(*args, **kwargs) 2023-05-06T10:50:10.2874172Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward 2023-05-06T10:50:10.2874546Z attention_output = self.SelfAttention( 2023-05-06T10:50:10.2875106Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T10:50:10.2875473Z return self._call_impl(*args, **kwargs) 2023-05-06T10:50:10.2875965Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T10:50:10.2876314Z return forward_call(*args, **kwargs) 2023-05-06T10:50:10.2876986Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 560, in forward 2023-05-06T10:50:10.2877510Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( 2023-05-06T10:50:10.2878377Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 389.06 MiB is free. Process 859708 has 39.01 GiB memory in use. Of the allocated memory 37.72 GiB is allocated by PyTorch, and 787.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T10:50:10.2879325Z 2023-05-06T10:50:10.2879496Z The above exception was the direct cause of the following exception: 2023-05-06T10:50:10.2879703Z 2023-05-06T10:50:10.2879804Z Traceback (most recent call last): 2023-05-06T10:50:10.2880156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T10:50:10.2880531Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T10:50:10.2880919Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T10:50:10.2881465Z self.validate_model(model, example_inputs) 2023-05-06T10:50:10.2881823Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T10:50:10.2882202Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T10:50:10.2882520Z NotImplementedError: Eager model failed to run 2023-05-06T10:50:10.2882701Z 2023-05-06T10:50:10.2882815Z WARNING:root:hf_T5_base failed to load 2023-05-06T10:50:27.7880923Z cuda eval hf_T5_large pass_due_to_skip 2023-05-06T10:50:36.0060617Z cuda eval lennard_jones pass 2023-05-06T10:50:52.2453700Z cuda eval llama pass 2023-05-06T10:50:56.7356889Z cuda eval maml pass_due_to_skip 2023-05-06T10:51:05.2628432Z cuda eval maml_omniglot pass 2023-05-06T10:51:09.1645119Z Downloading: "https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth" to /var/lib/jenkins/.cache/torch/hub/checkpoints/mnasnet1.0_top1_73.512-f206786ef8.pth 2023-05-06T10:51:09.2140719Z 2023-05-06T10:51:09.3144428Z 0% 0.00/16.9M [00:00' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T10:52:48.9599468Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
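The recompilation warning above (moco's builder.py:50, with the advice to set TORCHDYNAMO_REPORT_GUARD_FAILURES=1) can be followed up outside this job. A minimal sketch, assuming a local PyTorch 2.x install; the cache_size_limit value of 128 below is an illustrative choice, not taken from this log:

    # Hypothetical local-debugging sketch; this CI job does not run it.
    # Set the env var named in the warning *before* importing torch so Dynamo
    # picks it up, then optionally raise the recompile cache limit (the log's
    # default is 64) for code that legitimately recompiles many times.
    import os
    os.environ["TORCHDYNAMO_REPORT_GUARD_FAILURES"] = "1"

    import torch
    import torch._dynamo as dynamo

    dynamo.config.cache_size_limit = 128  # illustrative value; the log shows a default of 64

    device = "cuda" if torch.cuda.is_available() else "cpu"

    @torch.compile(backend="inductor")
    def step(x):
        return torch.nn.functional.relu(x) + 1

    # Varying input shapes typically trigger recompiles; guard-failure reports are then printed.
    for n in (8, 16, 32):
        step(torch.randn(n, device=device))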
2023-05-06T10:52:49.3463754Z [2023-05-06 10:52:49,345] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T10:52:58.5635915Z pass 2023-05-06T10:53:12.2019872Z cuda eval nvidia_deeprecommender pass 2023-05-06T10:53:16.9932963Z cuda eval opacus_cifar10 [2023-05-06 10:53:16,992] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T10:53:16.9935437Z [2023-05-06 10:53:16,992] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T10:53:32.3148814Z pass 2023-05-06T10:53:50.6054352Z cuda eval phlippe_densenet pass 2023-05-06T10:54:02.0400622Z cuda eval phlippe_resnet pass 2023-05-06T10:54:02.9701472Z accuracy pass_rate=88.00% 2023-05-06T10:54:02.9702595Z calls_captured gmean=0.00x mean=447.360x 2023-05-06T10:54:02.9703076Z unique_graphs gmean=0.00x mean=10.760x 2023-05-06T10:54:02.9703823Z graph_breaks gmean=0.00x mean=8.880x 2023-05-06T10:54:02.9706824Z unique_graph_breaks gmean=0.00x mean=1.080x 2023-05-06T10:54:03.5125901Z + python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_with_cudagraphs_torchbench_amp_inference_cuda_accuracy.csv 2023-05-06T10:54:14.3818304Z cuda eval functorch_maml_omniglot pass 2023-05-06T10:54:34.0080485Z cuda eval hf_Albert pass 2023-05-06T10:55:07.5251629Z cuda eval hf_Bart pass 2023-05-06T10:55:30.1280156Z cuda eval hf_Bert pass 2023-05-06T10:56:07.2656836Z cuda eval hf_Bert_large pass 2023-05-06T10:56:16.2347513Z cuda eval hf_BigBird WARNING:common:fp64 golden ref were not generated for hf_BigBird. Setting accuracy check to cosine 2023-05-06T10:57:45.0724798Z pass 2023-05-06T10:58:01.8409468Z cuda eval hf_DistilBert pass 2023-05-06T10:58:24.9230714Z cuda eval hf_GPT2 pass 2023-05-06T10:58:45.9638674Z cuda eval hf_GPT2_large pass_due_to_skip 2023-05-06T10:58:54.4586227Z cuda eval hf_Longformer WARNING:common:fp64 golden ref were not generated for hf_Longformer. 
Setting accuracy check to cosine 2023-05-06T10:59:50.5998480Z pass 2023-05-06T11:00:07.0746547Z cuda eval hf_Reformer [2023-05-06 11:00:07,072] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:09.1820198Z [2023-05-06 11:00:09,181] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:10.5945791Z [2023-05-06 11:00:10,593] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:12.3574628Z [2023-05-06 11:00:12,356] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:13.5374256Z [2023-05-06 11:00:13,536] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:15.2971776Z [2023-05-06 11:00:15,296] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:00:21.2472102Z pass 2023-05-06T11:00:28.1904082Z cuda eval hf_T5 WARNING:common:fp64 golden ref were not generated for hf_T5. Setting accuracy check to cosine 2023-05-06T11:00:51.0036771Z pass 2023-05-06T11:01:00.6270912Z Eager model failed to run 2023-05-06T11:01:00.6281251Z Traceback (most recent call last): 2023-05-06T11:01:00.6281689Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T11:01:00.6282046Z self.model_iter_fn(model, example_inputs) 2023-05-06T11:01:00.6282426Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:01:00.6284623Z return mod(*inputs) 2023-05-06T11:01:00.6287656Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6288025Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6288545Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6289141Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6289549Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T11:01:00.6289980Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T11:01:00.6290660Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6291295Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6292819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6293418Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6294409Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1704, in forward 2023-05-06T11:01:00.6294780Z decoder_outputs = self.decoder( 2023-05-06T11:01:00.6295310Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6295667Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6296165Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6296517Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6297023Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1074, in forward 2023-05-06T11:01:00.6297377Z layer_outputs = layer_module( 
2023-05-06T11:01:00.6297883Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6298258Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6298741Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6299093Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6299604Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward 2023-05-06T11:01:00.6299959Z self_attention_outputs = self.layer[0]( 2023-05-06T11:01:00.6300535Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6300905Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6301411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6301749Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6302268Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward 2023-05-06T11:01:00.6302644Z attention_output = self.SelfAttention( 2023-05-06T11:01:00.6303157Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:01:00.6303523Z return self._call_impl(*args, **kwargs) 2023-05-06T11:01:00.6304019Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:01:00.6304368Z return forward_call(*args, **kwargs) 2023-05-06T11:01:00.6304863Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 560, in forward 2023-05-06T11:01:00.6305385Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( 2023-05-06T11:01:00.6306241Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 389.06 MiB is free. Process 862666 has 39.01 GiB memory in use. Of the allocated memory 37.72 GiB is allocated by PyTorch, and 787.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T11:01:00.6307345Z 2023-05-06T11:01:00.6307516Z The above exception was the direct cause of the following exception: 2023-05-06T11:01:00.6307724Z 2023-05-06T11:01:00.6307839Z Traceback (most recent call last): 2023-05-06T11:01:00.6308158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T11:01:00.6308520Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T11:01:00.6309020Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T11:01:00.6309366Z self.validate_model(model, example_inputs) 2023-05-06T11:01:00.6309734Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T11:01:00.6310117Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T11:01:00.6310478Z NotImplementedError: Eager model failed to run 2023-05-06T11:01:00.6310660Z 2023-05-06T11:01:00.6310776Z WARNING:root:hf_T5_base failed to load 2023-05-06T11:01:18.0847953Z cuda eval hf_T5_large pass_due_to_skip 2023-05-06T11:01:26.1443684Z cuda eval lennard_jones pass 2023-05-06T11:01:41.3803746Z cuda eval llama [2023-05-06 11:01:41,378] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:01:41.6407246Z pass 2023-05-06T11:01:46.1566063Z cuda eval maml pass_due_to_skip 2023-05-06T11:01:54.7778391Z cuda eval maml_omniglot pass 2023-05-06T11:02:12.2274867Z cuda eval mnasnet1_0 pass 2023-05-06T11:02:30.4143098Z cuda eval mobilenet_v2 pass 2023-05-06T11:02:34.5314750Z The eval test only supports CPU. 2023-05-06T11:02:34.5318385Z Traceback (most recent call last): 2023-05-06T11:02:34.5318802Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T11:02:34.5319169Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T11:02:34.5319592Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T11:02:34.5319924Z benchmark = benchmark_cls( 2023-05-06T11:02:34.5326298Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T11:02:34.5326965Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T11:02:34.5328215Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T11:02:34.5328662Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T11:02:34.5329020Z NotImplementedError: The eval test only supports CPU. 2023-05-06T11:02:34.5329214Z 2023-05-06T11:02:34.5329358Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T11:02:52.9705472Z cuda eval mobilenet_v3_large pass 2023-05-06T11:02:59.8744556Z cuda eval moco [2023-05-06 11:02:59,873] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T11:03:10.9842042Z [2023-05-06 11:03:10,982] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:03:34.2521162Z [2023-05-06 11:03:34,250] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T11:03:34.2521940Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T11:03:34.2522927Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
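A few entries above, hf_T5_base again fails to load because even the eager validation pass exhausts the ~40 GiB A100, and the error text points at PYTORCH_CUDA_ALLOC_CONF / max_split_size_mb. A hedged sketch of trying that allocator hint locally; the 512 MiB split size is an assumed example value, and the setting only takes effect if made before the first CUDA allocation:

    # Hypothetical sketch of the allocator tweak suggested by the OOM message;
    # not something this workflow performs.
    import os
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"  # assumed example value

    import torch

    if torch.cuda.is_available():
        # The first allocation initializes the caching allocator with the setting above.
        x = torch.empty(1024, 1024, device="cuda")
        print(torch.cuda.memory_allocated(), "bytes allocated")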
2023-05-06T11:03:34.4594986Z [2023-05-06 11:03:34,458] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T11:03:34.4626059Z [2023-05-06 11:03:34,462] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T11:03:42.1422313Z [2023-05-06 11:03:42,141] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:03:42.3089909Z [2023-05-06 11:03:42,308] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:03:42.3303049Z pass 2023-05-06T11:03:55.8858452Z cuda eval nvidia_deeprecommender pass 2023-05-06T11:04:00.7317038Z cuda eval opacus_cifar10 [2023-05-06 11:04:00,730] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T11:04:00.7319189Z [2023-05-06 11:04:00,730] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T11:04:04.6691066Z [2023-05-06 11:04:04,667] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:05.6412001Z [2023-05-06 11:04:05,640] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:06.6062764Z [2023-05-06 11:04:06,605] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:06.6543764Z [2023-05-06 11:04:06,653] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:06.7043501Z [2023-05-06 11:04:06,703] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:07.6776405Z [2023-05-06 11:04:07,676] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:08.6492412Z [2023-05-06 11:04:08,648] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:08.6951989Z [2023-05-06 11:04:08,694] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:09.6510307Z [2023-05-06 11:04:09,650] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:10.6122026Z [2023-05-06 11:04:10,611] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:11.5825727Z [2023-05-06 11:04:11,581] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:11.6288091Z [2023-05-06 11:04:11,628] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:12.5914900Z [2023-05-06 11:04:12,590] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:13.5673688Z [2023-05-06 11:04:13,566] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:14.5085632Z [2023-05-06 11:04:14,507] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:14.5537345Z [2023-05-06 11:04:14,553] torch._inductor.utils: [WARNING] skipping cudagraphs due to 
input mutation 2023-05-06T11:04:14.6101930Z [2023-05-06 11:04:14,609] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T11:04:14.9904896Z pass 2023-05-06T11:04:32.6490484Z cuda eval phlippe_densenet pass 2023-05-06T11:04:44.0937459Z cuda eval phlippe_resnet pass 2023-05-06T11:04:45.0669646Z accuracy pass_rate=88.00% 2023-05-06T11:04:45.0674878Z calls_captured gmean=0.00x mean=447.360x 2023-05-06T11:04:45.0675937Z unique_graphs gmean=0.00x mean=10.760x 2023-05-06T11:04:45.0676384Z graph_breaks gmean=0.00x mean=8.880x 2023-05-06T11:04:45.0676836Z unique_graph_breaks gmean=0.00x mean=1.080x 2023-05-06T11:04:45.6361543Z + python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --dynamic-shapes --dynamic-batch-only --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_dynamic_torchbench_amp_inference_cuda_accuracy.csv 2023-05-06T11:04:56.4550541Z cuda eval functorch_maml_omniglot pass 2023-05-06T11:05:25.8444159Z cuda eval hf_Albert pass 2023-05-06T11:06:03.2193417Z cuda eval hf_Bart ERROR:common:Constraints violated! 2023-05-06T11:06:03.2196156Z 1. Could not validate constraint UnspecConstraint(L['decoder_input_ids'].size()[0]) as L['decoder_input_ids'].size()[0] is actually a non-atomic symbolic expression 4. Did you really mean to mark this dimension as dynamic? 2023-05-06T11:06:03.2197022Z 2023-05-06T11:06:03.2197032Z 2023-05-06T11:06:03.2197346Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:06:03.2197827Z import torch._dynamo 2023-05-06T11:06:03.2198249Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:06:03.2198700Z Traceback (most recent call last): 2023-05-06T11:06:03.2199339Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T11:06:03.2200109Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T11:06:03.2201282Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T11:06:03.2201716Z return fn(*args, **kwargs) 2023-05-06T11:06:03.2202057Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T11:06:03.2202429Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T11:06:03.2202808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:06:03.2203124Z return mod(*inputs) 2023-05-06T11:06:03.2203618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:06:03.2203994Z return self._call_impl(*args, **kwargs) 2023-05-06T11:06:03.2204499Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:06:03.2204858Z return forward_call(*args, **kwargs) 2023-05-06T11:06:03.2205249Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T11:06:03.2205886Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T11:06:03.2207071Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:06:03.2207636Z return self._call_impl(*args, **kwargs) 2023-05-06T11:06:03.2208150Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:06:03.2208506Z return forward_call(*args, **kwargs) 2023-05-06T11:06:03.2209031Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1373, in forward 2023-05-06T11:06:03.2209369Z outputs = self.model( 2023-05-06T11:06:03.2209872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:06:03.2210236Z return self._call_impl(*args, **kwargs) 2023-05-06T11:06:03.2210731Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:06:03.2211083Z return forward_call(*args, **kwargs) 2023-05-06T11:06:03.2211935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1237, in forward 2023-05-06T11:06:03.2212300Z encoder_outputs = self.encoder( 2023-05-06T11:06:03.2212824Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T11:06:03.2213209Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T11:06:03.2213746Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T11:06:03.2214148Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T11:06:03.2214790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T11:06:03.2215130Z return fn(*args, **kwargs) 2023-05-06T11:06:03.2215644Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T11:06:03.2216043Z return _compile( 2023-05-06T11:06:03.2216534Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:06:03.2216864Z r = func(*args, **kwargs) 2023-05-06T11:06:03.2217347Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 470, in _compile 2023-05-06T11:06:03.2217697Z check_fn = CheckFunctionManager( 2023-05-06T11:06:03.2218192Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 747, in __init__ 2023-05-06T11:06:03.2218553Z guard.create(local_builder, global_builder) 2023-05-06T11:06:03.2219029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_guards.py", line 196, in create 2023-05-06T11:06:03.2219432Z return self.create_fn(self.source.select(local_builder, global_builder), self) 2023-05-06T11:06:03.2219981Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 516, in SHAPE_ENV 2023-05-06T11:06:03.2220359Z guards = output_graph.shape_env.produce_guards( 2023-05-06T11:06:03.2220908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 2400, in produce_guards 2023-05-06T11:06:03.2221348Z raise ConstraintViolationError(f"Constraints violated!\n{err}") 2023-05-06T11:06:03.2221781Z torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated! 2023-05-06T11:06:03.2222602Z 1. Could not validate constraint UnspecConstraint(L['decoder_input_ids'].size()[0]) as L['decoder_input_ids'].size()[0] is actually a non-atomic symbolic expression 4. 
Did you really mean to mark this dimension as dynamic? 2023-05-06T11:06:03.2222974Z 2023-05-06T11:06:03.2222985Z 2023-05-06T11:06:03.2223149Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:06:03.2223436Z import torch._dynamo 2023-05-06T11:06:03.2223716Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:06:03.2223896Z 2023-05-06T11:06:03.2224054Z TorchDynamo optimized model failed to run because of following error 2023-05-06T11:06:03.2249957Z fail_to_run 2023-05-06T11:06:35.8774151Z cuda eval hf_Bert pass 2023-05-06T11:07:32.6560828Z cuda eval hf_Bert_large pass 2023-05-06T11:07:41.7868318Z cuda eval hf_BigBird WARNING:common:fp64 golden ref were not generated for hf_BigBird. Setting accuracy check to cosine 2023-05-06T11:07:49.7456093Z ERROR:common:backend='inductor' raised: 2023-05-06T11:07:49.7456492Z AssertionError: -1/2 2023-05-06T11:07:49.7456645Z 2023-05-06T11:07:49.7456651Z 2023-05-06T11:07:49.7456839Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:07:49.7463387Z import torch._dynamo 2023-05-06T11:07:49.7463978Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:07:49.7464368Z Traceback (most recent call last): 2023-05-06T11:07:49.7465283Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T11:07:49.7465676Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T11:07:49.7466373Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T11:07:49.7466715Z return fn(*args, **kwargs) 2023-05-06T11:07:49.7467042Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T11:07:49.7467411Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T11:07:49.7467788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:07:49.7468116Z return mod(*inputs) 2023-05-06T11:07:49.7468816Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7469202Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7469726Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7470131Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7471209Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2455, in forward 2023-05-06T11:07:49.7471789Z outputs = self.bert( 2023-05-06T11:07:49.7472610Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7473208Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7474085Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7474470Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7475019Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2103, in forward 2023-05-06T11:07:49.7475375Z to_mask = None 2023-05-06T11:07:49.7475851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7476221Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7476900Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7477347Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7477933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1632, in forward 2023-05-06T11:07:49.7478301Z layer_outputs = layer_module( 2023-05-06T11:07:49.7478812Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7479168Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7479672Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7480034Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7480601Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1484, in forward 2023-05-06T11:07:49.7480999Z self_attention_outputs = self.attention( 2023-05-06T11:07:49.7481531Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7481898Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7482382Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7482735Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7483279Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1397, in forward 2023-05-06T11:07:49.7483859Z self_outputs = self.self( 2023-05-06T11:07:49.7484372Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:07:49.7484738Z return self._call_impl(*args, **kwargs) 2023-05-06T11:07:49.7485235Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:07:49.7485570Z return forward_call(*args, **kwargs) 2023-05-06T11:07:49.7486111Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 470, in forward 2023-05-06T11:07:49.7486543Z context_layer, attention_probs = self.bigbird_block_sparse_attention( 2023-05-06T11:07:49.7487261Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T11:07:49.7487663Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T11:07:49.7488205Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T11:07:49.7488615Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T11:07:49.7489124Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T11:07:49.7489461Z return fn(*args, **kwargs) 2023-05-06T11:07:49.7489973Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T11:07:49.7490302Z return _compile( 2023-05-06T11:07:49.7490812Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7491145Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7491631Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T11:07:49.7492003Z out_code = transform_code_object(code, transform) 2023-05-06T11:07:49.7492585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T11:07:49.7492996Z transformations(instructions, code_options) 2023-05-06T11:07:49.7493497Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T11:07:49.7493825Z tracer.run() 2023-05-06T11:07:49.7494295Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T11:07:49.7494616Z super().run() 2023-05-06T11:07:49.7495068Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T11:07:49.7495388Z and self.step() 2023-05-06T11:07:49.7495863Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T11:07:49.7496201Z getattr(self, inst.opname)(inst) 2023-05-06T11:07:49.7496704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T11:07:49.7497092Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T11:07:49.7497633Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T11:07:49.7498034Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T11:07:49.7498399Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:07:49.7498698Z return func(*args, **kwds) 2023-05-06T11:07:49.7499206Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T11:07:49.7499599Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T11:07:49.7500095Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7500535Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7501062Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T11:07:49.7501490Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T11:07:49.7502055Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T11:07:49.7502441Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T11:07:49.7502986Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T11:07:49.7503369Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T11:07:49.7503979Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T11:07:49.7530028Z return compile_fx(*args, **kwargs) 2023-05-06T11:07:49.7531229Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T11:07:49.7531755Z return aot_autograd( 2023-05-06T11:07:49.7532657Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T11:07:49.7533272Z cg = 
aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T11:07:49.7534166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T11:07:49.7534796Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T11:07:49.7535614Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7536177Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7537021Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T11:07:49.7537780Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:07:49.7538705Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T11:07:49.7539304Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:07:49.7540263Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T11:07:49.7541015Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:07:49.7542024Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T11:07:49.7542723Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T11:07:49.7543619Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7544192Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7545055Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T11:07:49.7545654Z return inner_compile( 2023-05-06T11:07:49.7546470Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T11:07:49.7547157Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T11:07:49.7548038Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T11:07:49.7548610Z return fn(*args, **kwargs) 2023-05-06T11:07:49.7549109Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:07:49.7549587Z return func(*args, **kwds) 2023-05-06T11:07:49.7550435Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T11:07:49.7551489Z compiled_fn = graph.compile_to_fn() 2023-05-06T11:07:49.7552387Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T11:07:49.7553016Z return self.compile_to_module().call 2023-05-06T11:07:49.7553875Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7554430Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7555447Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T11:07:49.7556043Z code, linemap = self.codegen() 2023-05-06T11:07:49.7557240Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 647, in codegen 2023-05-06T11:07:49.7557874Z return self.wrapper_code.generate() 
2023-05-06T11:07:49.7558678Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:07:49.7559167Z r = func(*args, **kwargs) 2023-05-06T11:07:49.7559873Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 419, in generate 2023-05-06T11:07:49.7560404Z output_refs = self.get_output_refs() 2023-05-06T11:07:49.7561228Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 274, in wrapper 2023-05-06T11:07:49.7561558Z setattr(self, key, fn(self)) 2023-05-06T11:07:49.7562083Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in get_output_refs 2023-05-06T11:07:49.7562501Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T11:07:49.7563063Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in 2023-05-06T11:07:49.7563456Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T11:07:49.7564001Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 2142, in codegen_reference 2023-05-06T11:07:49.7564404Z expr = pexpr(V.graph.sizevars.simplify(self.shape)) 2023-05-06T11:07:49.7564918Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint 2023-05-06T11:07:49.7565275Z return self._str(self._print(expr)) 2023-05-06T11:07:49.7565763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print 2023-05-06T11:07:49.7566120Z return printmethod(expr, **kwargs) 2023-05-06T11:07:49.7566627Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 191, in _print_Pow 2023-05-06T11:07:49.7566983Z assert exp == int(exp), exp 2023-05-06T11:07:49.7567410Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T11:07:49.7567758Z AssertionError: -1/2 2023-05-06T11:07:49.7567912Z 2023-05-06T11:07:49.7567918Z 2023-05-06T11:07:49.7568084Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:07:49.7568374Z import torch._dynamo 2023-05-06T11:07:49.7568637Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:07:49.7568816Z 2023-05-06T11:07:49.7568986Z TorchDynamo optimized model failed to run because of following error 2023-05-06T11:07:49.7569285Z fail_to_run 2023-05-06T11:08:11.9183606Z cuda eval hf_DistilBert pass 2023-05-06T11:08:45.5912340Z cuda eval hf_GPT2 pass 2023-05-06T11:09:06.9299715Z cuda eval hf_GPT2_large pass_due_to_skip 2023-05-06T11:09:15.3987736Z cuda eval hf_Longformer WARNING:common:fp64 golden ref were not generated for hf_Longformer. Setting accuracy check to cosine 2023-05-06T11:09:23.9126068Z [2023-05-06 11:09:23,911] torch._dynamo.variables.torch: [WARNING] Calling on only torch.SymInt arguments is not yet supported. 2023-05-06T11:09:23.9127806Z To support this behavior, we need to allow const-propping tensors that store symint data. 2023-05-06T11:09:23.9130519Z For now, dynamo will explicitly graph break when it encounters user code with this behavior. 
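Note: the BackendCompilerFailed above is raised by Inductor's sympy expression printer, whose _print_Pow asserts an integer exponent (assert exp == int(exp)) and receives -1/2 for this graph. The traceback's own suggestion is to suppress backend compile errors and fall back to eager execution. A minimal sketch of that fallback, where model and example_inputs are hypothetical placeholders, not names from this run:

    import torch
    import torch._dynamo

    # Fall back to eager execution when a backend such as inductor fails to
    # compile, as the error message above suggests.
    torch._dynamo.config.suppress_errors = True

    compiled = torch.compile(model, backend="inductor")
    out = compiled(*example_inputs)  # frames that fail to compile run eagerly

With suppress_errors set, the failing frame runs eagerly instead of being reported as fail_to_run; it is a triage aid, not a fix for the underlying -1/2 power lowering.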
2023-05-06T11:09:23.9130774Z 2023-05-06T11:09:29.3972191Z ERROR:common:backend='inductor' raised: 2023-05-06T11:09:29.3972744Z LoweringException: AttributeError: 'View' object has no attribute 'get_stride' 2023-05-06T11:09:29.3973066Z target: aten.sym_stride 2023-05-06T11:09:29.3973300Z args[0]: TensorBox( 2023-05-06T11:09:29.3973508Z View( 2023-05-06T11:09:29.3973680Z View( 2023-05-06T11:09:29.3980329Z PermuteView(data=PermuteView(data=View( 2023-05-06T11:09:29.3980772Z StorageBox( 2023-05-06T11:09:29.3981578Z Pointwise( 2023-05-06T11:09:29.3982239Z 'cuda', 2023-05-06T11:09:29.3982597Z torch.float16, 2023-05-06T11:09:29.3982979Z def inner_fn(index): 2023-05-06T11:09:29.3983372Z i0, i1, i2 = index 2023-05-06T11:09:29.3983722Z tmp0 = ops.load(buf1, i2 + 768 * i1 + 768 * i0 * s0) 2023-05-06T11:09:29.3984197Z tmp1 = ops.load(arg1_1, i2) 2023-05-06T11:09:29.3984537Z tmp2 = tmp0 + tmp1 2023-05-06T11:09:29.3984997Z tmp3 = ops.constant(8.0, torch.float16) 2023-05-06T11:09:29.3985422Z tmp4 = tmp2 / tmp3 2023-05-06T11:09:29.3985786Z return tmp4 2023-05-06T11:09:29.3986166Z , 2023-05-06T11:09:29.3986453Z ranges=[4096, s0, 768], 2023-05-06T11:09:29.3986872Z origin_node=div, 2023-05-06T11:09:29.3987238Z origins={add, div} 2023-05-06T11:09:29.3987584Z ) 2023-05-06T11:09:29.3987943Z ), 2023-05-06T11:09:29.3988239Z size=(4096, s0, 12, 64), 2023-05-06T11:09:29.3988695Z reindex=lambda i0, i1, i2, i3: [i0, i1, 64*i2 + i3], 2023-05-06T11:09:29.3989110Z origins={add, div, view_6} 2023-05-06T11:09:29.3989558Z ), dims=[1, 0, 2, 3]), dims=[0, 2, 1, 3]), 2023-05-06T11:09:29.3989890Z size=(12*s0, 4096, 64), 2023-05-06T11:09:29.3990546Z reindex=lambda i0, i1, i2: [ModularIndexing(i0, 12, s0), ModularIndexing(i0, 1, 12), i1, i2], 2023-05-06T11:09:29.3991025Z origins={view_8} 2023-05-06T11:09:29.3991405Z ), 2023-05-06T11:09:29.3991743Z size=(12*s0, 8, 512, 64), 2023-05-06T11:09:29.3992205Z reindex=lambda i0, i1, i2, i3: [i0, 512*i1 + i2, i3], 2023-05-06T11:09:29.3992593Z origins={view_10} 2023-05-06T11:09:29.3992932Z ) 2023-05-06T11:09:29.3993256Z ) 2023-05-06T11:09:29.3993505Z args[1]: 1 2023-05-06T11:09:29.3993712Z 2023-05-06T11:09:29.3993723Z 2023-05-06T11:09:29.3994041Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:09:29.3994494Z import torch._dynamo 2023-05-06T11:09:29.3994998Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:09:29.3995420Z Traceback (most recent call last): 2023-05-06T11:09:29.3996020Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T11:09:29.3996946Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T11:09:29.3998022Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T11:09:29.3998548Z return fn(*args, **kwargs) 2023-05-06T11:09:29.3999141Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T11:09:29.3999774Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T11:09:29.4000427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:09:29.4000965Z return mod(*inputs) 2023-05-06T11:09:29.4001847Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:09:29.4002802Z return self._call_impl(*args, **kwargs) 2023-05-06T11:09:29.4003672Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:09:29.4004335Z return forward_call(*args, **kwargs) 2023-05-06T11:09:29.4005286Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T11:09:29.4005907Z outputs = self.longformer( 2023-05-06T11:09:29.4006783Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:09:29.4007391Z return self._call_impl(*args, **kwargs) 2023-05-06T11:09:29.4008313Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:09:29.4008981Z return forward_call(*args, **kwargs) 2023-05-06T11:09:29.4009571Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T11:09:29.4010216Z encoder_outputs = self.encoder( 2023-05-06T11:09:29.4010872Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:09:29.4011242Z return self._call_impl(*args, **kwargs) 2023-05-06T11:09:29.4011745Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:09:29.4012152Z return forward_call(*args, **kwargs) 2023-05-06T11:09:29.4012789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T11:09:29.4013218Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T11:09:29.4013834Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1326, in 2023-05-06T11:09:29.4014220Z layer_outputs = layer_module( 2023-05-06T11:09:29.4014726Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:09:29.4015096Z return self._call_impl(*args, **kwargs) 2023-05-06T11:09:29.4015595Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:09:29.4015951Z return forward_call(*args, **kwargs) 2023-05-06T11:09:29.4016589Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T11:09:29.4016975Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T11:09:29.4017495Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T11:09:29.4017889Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T11:09:29.4018410Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T11:09:29.4018744Z return fn(*args, **kwargs) 2023-05-06T11:09:29.4019250Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T11:09:29.4019604Z return _compile( 2023-05-06T11:09:29.4020068Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:09:29.4020426Z r = func(*args, **kwargs) 2023-05-06T11:09:29.4020908Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T11:09:29.4021290Z out_code = transform_code_object(code, transform) 2023-05-06T11:09:29.4021861Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T11:09:29.4022449Z transformations(instructions, code_options) 2023-05-06T11:09:29.4022980Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T11:09:29.4023467Z tracer.run() 2023-05-06T11:09:29.4023933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T11:09:29.4024259Z super().run() 2023-05-06T11:09:29.4024725Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T11:09:29.4025036Z and self.step() 2023-05-06T11:09:29.4025509Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T11:09:29.4025861Z getattr(self, inst.opname)(inst) 2023-05-06T11:09:29.4026514Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T11:09:29.4026880Z self.output.compile_subgraph( 2023-05-06T11:09:29.4027408Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T11:09:29.4027833Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T11:09:29.4028191Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:09:29.4028490Z return func(*args, **kwds) 2023-05-06T11:09:29.4029009Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T11:09:29.4029406Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T11:09:29.4029896Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:09:29.4030301Z r = func(*args, **kwargs) 2023-05-06T11:09:29.4030823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T11:09:29.4031273Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T11:09:29.4031845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T11:09:29.4032243Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T11:09:29.4032790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T11:09:29.4033158Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T11:09:29.4033672Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T11:09:29.4034027Z return compile_fx(*args, **kwargs) 2023-05-06T11:09:29.4034517Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T11:09:29.4034859Z return aot_autograd( 2023-05-06T11:09:29.4035357Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T11:09:29.4035754Z cg = aot_module_simplified(gm, 
example_inputs, **kwargs) 2023-05-06T11:09:29.4036304Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T11:09:29.4036979Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T11:09:29.4037541Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:09:29.4037864Z r = func(*args, **kwargs) 2023-05-06T11:09:29.4038408Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T11:09:29.4038882Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:09:29.4039476Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T11:09:29.4040078Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:09:29.4040747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T11:09:29.4041194Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:09:29.4041775Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T11:09:29.4042192Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T11:09:29.4042845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:09:29.4043183Z r = func(*args, **kwargs) 2023-05-06T11:09:29.4043674Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T11:09:29.4044102Z return inner_compile( 2023-05-06T11:09:29.4044610Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T11:09:29.4045003Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T11:09:29.4045515Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T11:09:29.4045829Z return fn(*args, **kwargs) 2023-05-06T11:09:29.4046141Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:09:29.4046442Z return func(*args, **kwds) 2023-05-06T11:09:29.4046928Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner 2023-05-06T11:09:29.4047283Z graph.run(*example_inputs) 2023-05-06T11:09:29.4047765Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:09:29.4048099Z r = func(*args, **kwargs) 2023-05-06T11:09:29.4048547Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run 2023-05-06T11:09:29.4048878Z return super().run(*args) 2023-05-06T11:09:29.4049342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T11:09:29.4049673Z self.env[node] = self.run_node(node) 2023-05-06T11:09:29.4050196Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 488, in run_node 2023-05-06T11:09:29.4050539Z result = super().run_node(n) 2023-05-06T11:09:29.4051006Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T11:09:29.4051392Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T11:09:29.4051908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 392, in call_function 2023-05-06T11:09:29.4052321Z raise LoweringException(e, target, args, kwargs).with_traceback( 2023-05-06T11:09:29.4052851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 389, in call_function 2023-05-06T11:09:29.4053212Z out = lowerings[target](*args, **kwargs) 2023-05-06T11:09:29.4053705Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 228, in wrapped 2023-05-06T11:09:29.4054050Z out = decomp_fn(*args, **kwargs) 2023-05-06T11:09:29.4054550Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 4036, in sym_stride 2023-05-06T11:09:29.4054888Z return a.get_stride()[dim] 2023-05-06T11:09:29.4055413Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3823, in __getattr__ 2023-05-06T11:09:29.4055739Z fn = getattr(self.data, name) 2023-05-06T11:09:29.4056167Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T11:09:29.4056799Z LoweringException: AttributeError: 'View' object has no attribute 'get_stride' 2023-05-06T11:09:29.4057098Z target: aten.sym_stride 2023-05-06T11:09:29.4057329Z args[0]: TensorBox( 2023-05-06T11:09:29.4057536Z View( 2023-05-06T11:09:29.4057712Z View( 2023-05-06T11:09:29.4057964Z PermuteView(data=PermuteView(data=View( 2023-05-06T11:09:29.4058222Z StorageBox( 2023-05-06T11:09:29.4058418Z Pointwise( 2023-05-06T11:09:29.4058673Z 'cuda', 2023-05-06T11:09:29.4058887Z torch.float16, 2023-05-06T11:09:29.4059105Z def inner_fn(index): 2023-05-06T11:09:29.4059335Z i0, i1, i2 = index 2023-05-06T11:09:29.4059693Z tmp0 = ops.load(buf1, i2 + 768 * i1 + 768 * i0 * s0) 2023-05-06T11:09:29.4059987Z tmp1 = ops.load(arg1_1, i2) 2023-05-06T11:09:29.4060260Z tmp2 = tmp0 + tmp1 2023-05-06T11:09:29.4060562Z tmp3 = ops.constant(8.0, torch.float16) 2023-05-06T11:09:29.4060973Z tmp4 = tmp2 / tmp3 2023-05-06T11:09:29.4061212Z return tmp4 2023-05-06T11:09:29.4061441Z , 2023-05-06T11:09:29.4061658Z ranges=[4096, s0, 768], 2023-05-06T11:09:29.4061877Z origin_node=div, 2023-05-06T11:09:29.4062197Z origins={add, div} 2023-05-06T11:09:29.4062544Z ) 2023-05-06T11:09:29.4062834Z ), 2023-05-06T11:09:29.4063103Z size=(4096, s0, 12, 64), 2023-05-06T11:09:29.4063560Z reindex=lambda i0, i1, i2, i3: [i0, i1, 64*i2 + i3], 2023-05-06T11:09:29.4063818Z origins={add, div, view_6} 2023-05-06T11:09:29.4064068Z ), dims=[1, 0, 2, 3]), dims=[0, 2, 1, 3]), 2023-05-06T11:09:29.4064317Z size=(12*s0, 4096, 64), 2023-05-06T11:09:29.4064631Z reindex=lambda i0, i1, i2: [ModularIndexing(i0, 12, s0), ModularIndexing(i0, 1, 12), i1, i2], 2023-05-06T11:09:29.4064941Z origins={view_8} 2023-05-06T11:09:29.4065145Z ), 2023-05-06T11:09:29.4065336Z size=(12*s0, 8, 512, 64), 2023-05-06T11:09:29.4065597Z reindex=lambda i0, i1, i2, i3: [i0, 512*i1 + i2, i3], 2023-05-06T11:09:29.4065847Z origins={view_10} 2023-05-06T11:09:29.4066048Z ) 2023-05-06T11:09:29.4066214Z ) 2023-05-06T11:09:29.4066398Z args[1]: 1 2023-05-06T11:09:29.4066524Z 2023-05-06T11:09:29.4066531Z 2023-05-06T11:09:29.4066697Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:09:29.4066975Z import torch._dynamo 
2023-05-06T11:09:29.4067255Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:09:29.4067433Z 2023-05-06T11:09:29.4067600Z TorchDynamo optimized model failed to run because of following error 2023-05-06T11:09:29.4067877Z fail_to_run 2023-05-06T11:10:02.5005880Z cuda eval hf_Reformer pass 2023-05-06T11:10:09.7666219Z cuda eval hf_T5 WARNING:common:fp64 golden ref were not generated for hf_T5. Setting accuracy check to cosine 2023-05-06T11:10:34.4144691Z pass 2023-05-06T11:10:44.0663375Z Eager model failed to run 2023-05-06T11:10:44.0672250Z Traceback (most recent call last): 2023-05-06T11:10:44.0672904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T11:10:44.0673485Z self.model_iter_fn(model, example_inputs) 2023-05-06T11:10:44.0677845Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:10:44.0678202Z return mod(*inputs) 2023-05-06T11:10:44.0679594Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0679981Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0680513Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0680999Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0682596Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T11:10:44.0683135Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T11:10:44.0683754Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0684125Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0684659Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0685015Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0686120Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1704, in forward 2023-05-06T11:10:44.0686707Z decoder_outputs = self.decoder( 2023-05-06T11:10:44.0687252Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0687764Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0688647Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0689086Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0689611Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1074, in forward 2023-05-06T11:10:44.0689951Z layer_outputs = layer_module( 2023-05-06T11:10:44.0690463Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0691011Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0691513Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0691869Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0692384Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward 2023-05-06T11:10:44.0692839Z self_attention_outputs = 
self.layer[0]( 2023-05-06T11:10:44.0693377Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0693747Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0694506Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0695224Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0695870Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward 2023-05-06T11:10:44.0696260Z attention_output = self.SelfAttention( 2023-05-06T11:10:44.0696818Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:10:44.0697178Z return self._call_impl(*args, **kwargs) 2023-05-06T11:10:44.0697673Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:10:44.0698028Z return forward_call(*args, **kwargs) 2023-05-06T11:10:44.0698541Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 560, in forward 2023-05-06T11:10:44.0699100Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( 2023-05-06T11:10:44.0699967Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 389.06 MiB is free. Process 864689 has 39.01 GiB memory in use. Of the allocated memory 37.72 GiB is allocated by PyTorch, and 787.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T11:10:44.0701353Z 2023-05-06T11:10:44.0701524Z The above exception was the direct cause of the following exception: 2023-05-06T11:10:44.0701732Z 2023-05-06T11:10:44.0701847Z Traceback (most recent call last): 2023-05-06T11:10:44.0702180Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T11:10:44.0702535Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T11:10:44.0702922Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T11:10:44.0703273Z self.validate_model(model, example_inputs) 2023-05-06T11:10:44.0703634Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T11:10:44.0704002Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T11:10:44.0704475Z NotImplementedError: Eager model failed to run 2023-05-06T11:10:44.0704662Z 2023-05-06T11:10:44.0704776Z WARNING:root:hf_T5_base failed to load 2023-05-06T11:11:01.4288121Z cuda eval hf_T5_large pass_due_to_skip 2023-05-06T11:11:09.7187958Z cuda eval lennard_jones pass 2023-05-06T11:11:38.4081053Z cuda eval llama pass 2023-05-06T11:11:43.5103885Z cuda eval maml pass_due_to_skip 2023-05-06T11:11:52.6968819Z cuda eval maml_omniglot pass 2023-05-06T11:12:13.5639763Z cuda eval mnasnet1_0 pass 2023-05-06T11:12:37.1521651Z cuda eval mobilenet_v2 pass 2023-05-06T11:12:41.3972654Z The eval test only supports CPU. 
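Note on the hf_T5_base load failure above: eager execution itself hit a CUDA out-of-memory error, and the allocator message points at PYTORCH_CUDA_ALLOC_CONF / max_split_size_mb to reduce fragmentation. A minimal sketch of applying that hint, assuming the variable is set before the first CUDA allocation; the value 512 is an illustrative choice, not taken from this log:

    import os

    # Must be set before CUDA is initialized in the process; 512 MiB is an
    # example split size, tune per workload.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

    import torch
    buf = torch.empty(1024, 1024, device="cuda")  # allocations now use the tuned caching allocator

Equivalently, the variable can be exported in the job environment before launching the benchmark script.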
2023-05-06T11:12:41.3975217Z Traceback (most recent call last): 2023-05-06T11:12:41.3975852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T11:12:41.3976271Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T11:12:41.3979357Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T11:12:41.3980000Z benchmark = benchmark_cls( 2023-05-06T11:12:41.3980650Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T11:12:41.3981350Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T11:12:41.3982107Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T11:12:41.3982904Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T11:12:41.3983262Z NotImplementedError: The eval test only supports CPU. 2023-05-06T11:12:41.3983456Z 2023-05-06T11:12:41.3983599Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T11:13:06.4780892Z cuda eval mobilenet_v3_large pass 2023-05-06T11:13:13.6322228Z cuda eval moco [2023-05-06 11:13:13,630] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T11:31:17.7612876Z [2023-05-06 11:31:17,758] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T11:31:17.7613821Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T11:31:17.7614782Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T11:31:18.3153530Z [2023-05-06 11:31:18,314] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T11:31:32.9022109Z ERROR:common:backend='compile_fn' raised: 2023-05-06T11:31:32.9022631Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T11:31:32.9022833Z 2023-05-06T11:31:32.9023118Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T11:31:32.9023483Z Original traceback: 2023-05-06T11:31:32.9023859Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T11:31:32.9027999Z self.queue_ptr[0] = ptr 2023-05-06T11:31:32.9028264Z 2023-05-06T11:31:32.9028270Z 2023-05-06T11:31:32.9028276Z 2023-05-06T11:31:32.9028532Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:31:32.9029023Z import torch._dynamo 2023-05-06T11:31:32.9029570Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:31:32.9030105Z Traceback (most recent call last): 2023-05-06T11:31:32.9030706Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T11:31:32.9031361Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T11:31:32.9032686Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T11:31:32.9033023Z return fn(*args, **kwargs) 2023-05-06T11:31:32.9033364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T11:31:32.9033758Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T11:31:32.9034118Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 
2023-05-06T11:31:32.9034438Z return mod(*inputs) 2023-05-06T11:31:32.9034952Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:31:32.9035323Z return self._call_impl(*args, **kwargs) 2023-05-06T11:31:32.9035814Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:31:32.9036167Z return forward_call(*args, **kwargs) 2023-05-06T11:31:32.9036902Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T11:31:32.9037274Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T11:31:32.9037833Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T11:31:32.9038246Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T11:31:32.9038793Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:31:32.9039147Z return self._call_impl(*args, **kwargs) 2023-05-06T11:31:32.9039645Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:31:32.9040053Z return forward_call(*args, **kwargs) 2023-05-06T11:31:32.9040484Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T11:31:32.9040877Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T11:31:32.9041301Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T11:31:32.9041716Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T11:31:32.9042124Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 139, in 2023-05-06T11:31:32.9042495Z k = self._batch_unshuffle_ddp(k, idx_unshuffle) 2023-05-06T11:31:32.9042892Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 158, in 2023-05-06T11:31:32.9043249Z self._dequeue_and_enqueue(k) 2023-05-06T11:31:32.9043744Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T11:31:32.9044093Z return func(*args, **kwargs) 2023-05-06T11:31:32.9044484Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 55, in _dequeue_and_enqueue 2023-05-06T11:31:32.9044841Z keys = concat_all_gather(keys) 2023-05-06T11:31:32.9045424Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 59, in 2023-05-06T11:31:32.9045801Z ptr = int(self.queue_ptr) 2023-05-06T11:31:32.9046306Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T11:31:32.9046719Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T11:31:32.9047274Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T11:31:32.9047680Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T11:31:32.9048204Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T11:31:32.9048643Z return fn(*args, **kwargs) 2023-05-06T11:31:32.9049166Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T11:31:32.9049524Z return _compile( 2023-05-06T11:31:32.9049975Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:31:32.9050305Z r = func(*args, **kwargs) 2023-05-06T11:31:32.9050837Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T11:31:32.9051213Z out_code = transform_code_object(code, transform) 2023-05-06T11:31:32.9051785Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T11:31:32.9052194Z transformations(instructions, code_options) 2023-05-06T11:31:32.9052720Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T11:31:32.9053033Z tracer.run() 2023-05-06T11:31:32.9053504Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T11:31:32.9053834Z super().run() 2023-05-06T11:31:32.9054302Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T11:31:32.9054612Z and self.step() 2023-05-06T11:31:32.9055087Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T11:31:32.9055437Z getattr(self, inst.opname)(inst) 2023-05-06T11:31:32.9055942Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T11:31:32.9056307Z self.output.compile_subgraph( 2023-05-06T11:31:32.9056848Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T11:31:32.9057273Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T11:31:32.9057629Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:31:32.9057940Z return func(*args, **kwds) 2023-05-06T11:31:32.9058462Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T11:31:32.9058837Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T11:31:32.9059337Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:31:32.9059666Z r = func(*args, **kwargs) 2023-05-06T11:31:32.9060168Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T11:31:32.9060626Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T11:31:32.9061201Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T11:31:32.9061599Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T11:31:32.9062272Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T11:31:32.9062676Z return self.backend_compile_fn(gm, example_inputs) 2023-05-06T11:31:32.9063221Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T11:31:32.9063610Z compiled_gm = compiler_fn(gm, 
example_inputs) 2023-05-06T11:31:32.9064119Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T11:31:32.9064476Z return compile_fx(*args, **kwargs) 2023-05-06T11:31:32.9065086Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T11:31:32.9065416Z return aot_autograd( 2023-05-06T11:31:32.9065913Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T11:31:32.9066316Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T11:31:32.9066881Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T11:31:32.9067260Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T11:31:32.9067764Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:31:32.9068094Z r = func(*args, **kwargs) 2023-05-06T11:31:32.9068615Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2959, in create_aot_dispatcher_function 2023-05-06T11:31:32.9069040Z fw_metadata = run_functionalized_fw_and_collect_metadata( 2023-05-06T11:31:32.9069582Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 719, in inner 2023-05-06T11:31:32.9069924Z flat_f_outs = f(*flat_f_args) 2023-05-06T11:31:32.9070464Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3259, in functional_call 2023-05-06T11:31:32.9070865Z out = Interpreter(mod).run(*args[params_len:], **kwargs) 2023-05-06T11:31:32.9071371Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T11:31:32.9071703Z self.env[node] = self.run_node(node) 2023-05-06T11:31:32.9072188Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T11:31:32.9072562Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T11:31:32.9073076Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 267, in call_function 2023-05-06T11:31:32.9073413Z return target(*args, **kwargs) 2023-05-06T11:31:32.9073923Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T11:31:32.9074301Z return replace_fn(func)(*args, **kwargs) 2023-05-06T11:31:32.9074727Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T11:31:32.9075190Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T11:31:32.9075385Z 2023-05-06T11:31:32.9075633Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T11:31:32.9075988Z Original traceback: 2023-05-06T11:31:32.9076359Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T11:31:32.9076938Z self.queue_ptr[0] = ptr 2023-05-06T11:31:32.9077090Z 2023-05-06T11:31:32.9077096Z 2023-05-06T11:31:32.9077101Z 2023-05-06T11:31:32.9077277Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:31:32.9077569Z import torch._dynamo 2023-05-06T11:31:32.9077832Z torch._dynamo.config.suppress_errors = True 
2023-05-06T11:31:32.9078222Z 2023-05-06T11:31:32.9078393Z TorchDynamo optimized model failed to run because of following error 2023-05-06T11:31:32.9134435Z fail_to_run 2023-05-06T11:31:47.4234625Z cuda eval nvidia_deeprecommender pass 2023-05-06T11:31:52.5229163Z cuda eval opacus_cifar10 [2023-05-06 11:31:52,521] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T11:31:52.5231456Z [2023-05-06 11:31:52,521] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T11:32:08.6673954Z pass 2023-05-06T11:32:30.5655474Z cuda eval phlippe_densenet pass 2023-05-06T11:32:43.2544537Z cuda eval phlippe_resnet pass 2023-05-06T11:32:44.3752289Z accuracy pass_rate=72.00% 2023-05-06T11:32:44.3752633Z calls_captured gmean=0.00x mean=342.320x 2023-05-06T11:32:44.3753628Z unique_graphs gmean=0.00x mean=8.320x 2023-05-06T11:32:44.3755097Z graph_breaks gmean=0.00x mean=6.840x 2023-05-06T11:32:44.3758665Z unique_graph_breaks gmean=0.00x mean=0.800x 2023-05-06T11:32:45.0060144Z + [[ inference == \i\n\f\e\r\e\n\c\e ]] 2023-05-06T11:32:45.0062437Z + python benchmarks/dynamo/torchbench.py --accuracy --inference --amp --backend inductor --disable-cudagraphs --cpp-wrapper --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_cpp_wrapper_torchbench_amp_inference_cuda_accuracy.csv 2023-05-06T11:33:24.4990690Z cuda eval functorch_maml_omniglot pass 2023-05-06T11:34:22.8104983Z cuda eval hf_Albert pass 2023-05-06T11:37:14.7516253Z cuda eval hf_Bart pass 2023-05-06T11:38:18.3749029Z cuda eval hf_Bert pass 2023-05-06T11:39:55.0101703Z cuda eval hf_Bert_large pass 2023-05-06T11:40:04.0785395Z cuda eval hf_BigBird WARNING:common:fp64 golden ref were not generated for hf_BigBird. Setting accuracy check to cosine 2023-05-06T11:45:46.1699053Z pass 2023-05-06T11:46:35.8270864Z cuda eval hf_DistilBert pass 2023-05-06T11:47:36.9995277Z cuda eval hf_GPT2 pass 2023-05-06T11:47:58.0767638Z cuda eval hf_GPT2_large pass_due_to_skip 2023-05-06T11:48:06.4847795Z cuda eval hf_Longformer WARNING:common:fp64 golden ref were not generated for hf_Longformer. 
Setting accuracy check to cosine 2023-05-06T11:51:32.7328043Z pass 2023-05-06T11:56:41.8129802Z cuda eval hf_Reformer ERROR:common:backend='inductor' raised: 2023-05-06T11:56:41.8133599Z RuntimeError: Error building extension 'inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp -o main.o 2023-05-06T11:56:41.8139118Z FAILED: main.o 2023-05-06T11:56:41.8145347Z c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp -o main.o 2023-05-06T11:56:41.8150244Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:84: warning: multi-character character constant [-Wmultichar] 2023-05-06T11:56:41.8152809Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8153314Z ^~~~~~ 2023-05-06T11:56:41.8200279Z 
/var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T11:56:41.8201563Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:28: error: expected identifier before numeric constant 2023-05-06T11:56:41.8202891Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8203359Z ^~ 2023-05-06T11:56:41.8204413Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:30: error: expected ‘]’ before ‘,’ token 2023-05-06T11:56:41.8205507Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8206061Z ^ 2023-05-06T11:56:41.8206737Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In lambda function: 2023-05-06T11:56:41.8208020Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:30: error: expected ‘{’ before ‘,’ token 2023-05-06T11:56:41.8210050Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T11:56:41.8211420Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:41: error: expected ‘)’ before ‘]’ token 2023-05-06T11:56:41.8212424Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8212890Z ^ 2023-05-06T11:56:41.8215088Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:56:17: error: unable to deduce ‘auto’ from ‘buf2’ 2023-05-06T11:56:41.8216504Z auto buf3 = buf2; 2023-05-06T11:56:41.8216828Z ^~~~ 2023-05-06T11:56:41.8217923Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:108:5: error: ‘aten’ was not declared in this scope 2023-05-06T11:56:41.8219014Z aten.scatter_(buf15, -1, buf13, at::as_strided(buf16, {4L, 12L, 4096L}, {0L, 0L, 1L}, 0L)) 2023-05-06T11:56:41.8219377Z ^~~~ 2023-05-06T11:56:41.8219985Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:108:5: note: suggested alternatives: 2023-05-06T11:56:41.8220968Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T11:56:41.8221839Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T11:56:41.8222643Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T11:56:41.8223514Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T11:56:41.8224405Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T11:56:41.8225412Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T11:56:41.8226489Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T11:56:41.8227497Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T11:56:41.8228446Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T11:56:41.8229414Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T11:56:41.8230348Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T11:56:41.8231300Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T11:56:41.8232264Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T11:56:41.8233059Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T11:56:41.8233799Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:1: 2023-05-06T11:56:41.8235220Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8235787Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8236096Z ^ 2023-05-06T11:56:41.8237141Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::namespaces::aten’ 2023-05-06T11:56:41.8237694Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8238013Z ^ 2023-05-06T11:56:41.8238831Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8239362Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8239680Z ^ 2023-05-06T11:56:41.8240722Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8241795Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8242758Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4:0, 2023-05-06T11:56:41.8243687Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T11:56:41.8244586Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T11:56:41.8245506Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T11:56:41.8246556Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T11:56:41.8247479Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T11:56:41.8248373Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T11:56:41.8249280Z 
from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T11:56:41.8250292Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T11:56:41.8251179Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T11:56:41.8252074Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T11:56:41.8252994Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T11:56:41.8253839Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T11:56:41.8254643Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:1: 2023-05-06T11:56:41.8255882Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:78:11: note: ‘torch::jit::aten’ 2023-05-06T11:56:41.8256412Z namespace aten { 2023-05-06T11:56:41.8256724Z ^~~~ 2023-05-06T11:56:41.8257092Z ninja: build stopped: subcommand failed. 2023-05-06T11:56:41.8257335Z 2023-05-06T11:56:41.8257344Z 2023-05-06T11:56:41.8257352Z 2023-05-06T11:56:41.8257607Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:56:41.8258056Z import torch._dynamo 2023-05-06T11:56:41.8258486Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:56:41.8258900Z Traceback (most recent call last): 2023-05-06T11:56:41.8259412Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T11:56:41.8260301Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T11:56:41.8261129Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T11:56:41.8261566Z return fn(*args, **kwargs) 2023-05-06T11:56:41.8262036Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T11:56:41.8262575Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T11:56:41.8263093Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T11:56:41.8263515Z return mod(*inputs) 2023-05-06T11:56:41.8264492Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8264990Z return self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8265702Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8266302Z return forward_call(*args, **kwargs) 2023-05-06T11:56:41.8267083Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2401, in forward 2023-05-06T11:56:41.8269843Z reformer_outputs = self.reformer( 2023-05-06T11:56:41.8270677Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8271233Z return self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8271983Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8272518Z return forward_call(*args, **kwargs) 
2023-05-06T11:56:41.8273362Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2056, in forward 2023-05-06T11:56:41.8274018Z least_common_mult_chunk_length = _get_least_common_mult_chunk_len(self.config) 2023-05-06T11:56:41.8274951Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2057, in 2023-05-06T11:56:41.8275579Z min_chunk_length = _get_min_chunk_len(self.config) 2023-05-06T11:56:41.8276528Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2093, in 2023-05-06T11:56:41.8277272Z embedding_output = self.embeddings( 2023-05-06T11:56:41.8278135Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2100, in 2023-05-06T11:56:41.8278666Z encoder_outputs = self.encoder( 2023-05-06T11:56:41.8279378Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8279863Z return self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8280544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8281033Z return forward_call(*args, **kwargs) 2023-05-06T11:56:41.8281757Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1727, in forward 2023-05-06T11:56:41.8282308Z hidden_states = _ReversibleFunction.apply( 2023-05-06T11:56:41.8283093Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply 2023-05-06T11:56:41.8283677Z return super().apply(*args, **kwargs) # type: ignore[misc] 2023-05-06T11:56:41.8284544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1615, in forward 2023-05-06T11:56:41.8285110Z layer_outputs = layer( 2023-05-06T11:56:41.8285956Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8286827Z return self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8287595Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8288134Z return forward_call(*args, **kwargs) 2023-05-06T11:56:41.8288995Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1480, in forward 2023-05-06T11:56:41.8289570Z attn_outputs = self.attention( 2023-05-06T11:56:41.8290382Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8290958Z return self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8291955Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8292514Z return forward_call(*args, **kwargs) 2023-05-06T11:56:41.8293330Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1313, in forward 2023-05-06T11:56:41.8293877Z self_attention_outputs = self.self_attention( 2023-05-06T11:56:41.8294569Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T11:56:41.8295065Z return 
self._call_impl(*args, **kwargs) 2023-05-06T11:56:41.8295796Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T11:56:41.8296277Z return forward_call(*args, **kwargs) 2023-05-06T11:56:41.8296947Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T11:56:41.8297520Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T11:56:41.8298360Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T11:56:41.8298995Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T11:56:41.8299815Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T11:56:41.8300327Z return fn(*args, **kwargs) 2023-05-06T11:56:41.8301131Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T11:56:41.8301654Z return _compile( 2023-05-06T11:56:41.8302381Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:56:41.8335602Z r = func(*args, **kwargs) 2023-05-06T11:56:41.8336582Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T11:56:41.8337213Z out_code = transform_code_object(code, transform) 2023-05-06T11:56:41.8338102Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T11:56:41.8338756Z transformations(instructions, code_options) 2023-05-06T11:56:41.8339590Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T11:56:41.8340101Z tracer.run() 2023-05-06T11:56:41.8340845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T11:56:41.8341345Z super().run() 2023-05-06T11:56:41.8342137Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T11:56:41.8342650Z and self.step() 2023-05-06T11:56:41.8343425Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T11:56:41.8343967Z getattr(self, inst.opname)(inst) 2023-05-06T11:56:41.8344695Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T11:56:41.8345668Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T11:56:41.8346568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T11:56:41.8347201Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T11:56:41.8347743Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:56:41.8348181Z return func(*args, **kwds) 2023-05-06T11:56:41.8348982Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T11:56:41.8349537Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T11:56:41.8350394Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:56:41.8350853Z 
r = func(*args, **kwargs) 2023-05-06T11:56:41.8351571Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T11:56:41.8352142Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T11:56:41.8353010Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T11:56:41.8353573Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T11:56:41.8354363Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T11:56:41.8355114Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T11:56:41.8355932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T11:56:41.8356450Z return compile_fx(*args, **kwargs) 2023-05-06T11:56:41.8357338Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T11:56:41.8357879Z return compile_fx_with_cpp_wrapper( 2023-05-06T11:56:41.8358637Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_with_cpp_wrapper 2023-05-06T11:56:41.8359134Z return compile_fx( 2023-05-06T11:56:41.8359903Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T11:56:41.8360447Z return aot_autograd( 2023-05-06T11:56:41.8361231Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T11:56:41.8361824Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T11:56:41.8362679Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T11:56:41.8363264Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T11:56:41.8364022Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:56:41.8364517Z r = func(*args, **kwargs) 2023-05-06T11:56:41.8365325Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T11:56:41.8366079Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:56:41.8366939Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T11:56:41.8367579Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:56:41.8368493Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T11:56:41.8369134Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T11:56:41.8370296Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T11:56:41.8370904Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T11:56:41.8371629Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:56:41.8372073Z r = func(*args, **kwargs) 2023-05-06T11:56:41.8372763Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T11:56:41.8373269Z return inner_compile( 2023-05-06T11:56:41.8374004Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T11:56:41.8374787Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T11:56:41.8375576Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T11:56:41.8376139Z return fn(*args, **kwargs) 2023-05-06T11:56:41.8376559Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T11:56:41.8376973Z return func(*args, **kwds) 2023-05-06T11:56:41.8377705Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T11:56:41.8378228Z compiled_fn = graph.compile_to_fn() 2023-05-06T11:56:41.8378943Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T11:56:41.8379470Z return self.compile_to_module().call 2023-05-06T11:56:41.8380210Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T11:56:41.8380692Z r = func(*args, **kwargs) 2023-05-06T11:56:41.8381420Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 695, in compile_to_module 2023-05-06T11:56:41.8382018Z mod = PyCodeCache.load(code, linemap=linemap) 2023-05-06T11:56:41.8382782Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 706, in load 2023-05-06T11:56:41.8383313Z return cls.load_by_key_path(key, path, linemap) 2023-05-06T11:56:41.8384105Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 721, in load_by_key_path 2023-05-06T11:56:41.8384633Z exec(code, mod.__dict__, mod.__dict__) 2023-05-06T11:56:41.8385265Z File "/tmp/torchinductor_jenkins/le/cle22tvh5jhlj6snrsefypheas5zpwr3xn5i4os2n7zjokpv3dh2.py", line 147, in 2023-05-06T11:56:41.8385976Z module = load_inline( 2023-05-06T11:56:41.8386712Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1433, in load_inline 2023-05-06T11:56:41.8387208Z return _jit_compile( 2023-05-06T11:56:41.8387969Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile 2023-05-06T11:56:41.8388533Z _write_ninja_file_and_build_library( 2023-05-06T11:56:41.8389391Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1623, in _write_ninja_file_and_build_library 2023-05-06T11:56:41.8389884Z _run_ninja_build( 2023-05-06T11:56:41.8390579Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1910, in _run_ninja_build 2023-05-06T11:56:41.8391153Z raise RuntimeError(message) from e 2023-05-06T11:56:41.8391767Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T11:56:41.8397289Z RuntimeError: Error building extension 'inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" 
-I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp -o main.o 2023-05-06T11:56:41.8400490Z FAILED: main.o 2023-05-06T11:56:41.8405313Z c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp -o main.o 2023-05-06T11:56:41.8409172Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:84: warning: multi-character character constant [-Wmultichar] 2023-05-06T11:56:41.8410299Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8410788Z ^~~~~~ 2023-05-06T11:56:41.8412182Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T11:56:41.8413308Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:28: error: expected identifier before numeric constant 2023-05-06T11:56:41.8414357Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), 
pin_memory=False); 2023-05-06T11:56:41.8414811Z ^~ 2023-05-06T11:56:41.8415815Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:30: error: expected ‘]’ before ‘,’ token 2023-05-06T11:56:41.8416865Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8417541Z ^ 2023-05-06T11:56:41.8418173Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In lambda function: 2023-05-06T11:56:41.8419496Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:30: error: expected ‘{’ before ‘,’ token 2023-05-06T11:56:41.8421177Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T11:56:41.8422890Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:55:41: error: expected ‘)’ before ‘]’ token 2023-05-06T11:56:41.8424024Z auto buf2 = at::randn([12, 64, 1, 64], dtype=torch.float16, device=device(type='cuda', index=0), pin_memory=False); 2023-05-06T11:56:41.8424493Z ^ 2023-05-06T11:56:41.8425542Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:56:17: error: unable to deduce ‘auto’ from ‘buf2’ 2023-05-06T11:56:41.8426406Z auto buf3 = buf2; 2023-05-06T11:56:41.8426730Z ^~~~ 2023-05-06T11:56:41.8427722Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:108:5: error: ‘aten’ was not declared in this scope 2023-05-06T11:56:41.8428773Z aten.scatter_(buf15, -1, buf13, at::as_strided(buf16, {4L, 12L, 4096L}, {0L, 0L, 1L}, 0L)) 2023-05-06T11:56:41.8429168Z ^~~~ 2023-05-06T11:56:41.8429815Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:108:5: note: suggested alternatives: 2023-05-06T11:56:41.8430849Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T11:56:41.8431778Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T11:56:41.8432668Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T11:56:41.8433498Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T11:56:41.8434338Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T11:56:41.8435282Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T11:56:41.8436301Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T11:56:41.8437421Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T11:56:41.8438366Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T11:56:41.8439344Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T11:56:41.8440402Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T11:56:41.8441325Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T11:56:41.8442383Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T11:56:41.8443671Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T11:56:41.8444548Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:1: 2023-05-06T11:56:41.8445940Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8446514Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8446869Z ^ 2023-05-06T11:56:41.8447799Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::namespaces::aten’ 2023-05-06T11:56:41.8448398Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8448724Z ^ 2023-05-06T11:56:41.8449754Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8450156Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T11:56:41.8450377Z ^ 2023-05-06T11:56:41.8450947Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8451612Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten’ 2023-05-06T11:56:41.8452243Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4:0, 2023-05-06T11:56:41.8452860Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T11:56:41.8453444Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T11:56:41.8454019Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T11:56:41.8454671Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T11:56:41.8455347Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T11:56:41.8456488Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T11:56:41.8457140Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T11:56:41.8457787Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T11:56:41.8458437Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T11:56:41.8459055Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T11:56:41.8459648Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T11:56:41.8460223Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T11:56:41.8460764Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cb7qjaffvkdfhbceciukbwpgv6g6yh73klz4n6mhymcn3zelrkcr/main.cpp:1: 2023-05-06T11:56:41.8461560Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:78:11: note: ‘torch::jit::aten’ 2023-05-06T11:56:41.8461887Z namespace aten { 2023-05-06T11:56:41.8462097Z ^~~~ 2023-05-06T11:56:41.8462353Z ninja: build stopped: subcommand failed. 2023-05-06T11:56:41.8462511Z 2023-05-06T11:56:41.8462517Z 2023-05-06T11:56:41.8462525Z 2023-05-06T11:56:41.8462855Z You can suppress this exception and fall back to eager by setting: 2023-05-06T11:56:41.8463154Z import torch._dynamo 2023-05-06T11:56:41.8463431Z torch._dynamo.config.suppress_errors = True 2023-05-06T11:56:41.8463613Z 2023-05-06T11:56:41.8463770Z TorchDynamo optimized model failed to run because of following error 2023-05-06T11:56:41.8464051Z fail_to_run 2023-05-06T11:56:48.9430053Z cuda eval hf_T5 WARNING:common:fp64 golden ref were not generated for hf_T5. Setting accuracy check to cosine 2023-05-06T12:02:49.0063493Z pass 2023-05-06T12:02:58.8683044Z Eager model failed to run 2023-05-06T12:02:58.8689582Z Traceback (most recent call last): 2023-05-06T12:02:58.8690121Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T12:02:58.8692004Z self.model_iter_fn(model, example_inputs) 2023-05-06T12:02:58.8697295Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T12:02:58.8697895Z return mod(*inputs) 2023-05-06T12:02:58.8699319Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8699921Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8700916Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8702318Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8702768Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T12:02:58.8703272Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T12:02:58.8704021Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8704384Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8704894Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8705370Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8705903Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1704, in forward 2023-05-06T12:02:58.8706251Z decoder_outputs = self.decoder( 2023-05-06T12:02:58.8706764Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8707135Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8707620Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8707972Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8708494Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1074, in forward 2023-05-06T12:02:58.8708849Z layer_outputs = layer_module( 2023-05-06T12:02:58.8709345Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8709714Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8710210Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8710545Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8711050Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 693, in forward 2023-05-06T12:02:58.8711413Z self_attention_outputs = self.layer[0]( 2023-05-06T12:02:58.8711932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8712291Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8712851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8713603Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8714111Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 600, in forward 2023-05-06T12:02:58.8714492Z attention_output = self.SelfAttention( 2023-05-06T12:02:58.8715069Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:02:58.8715437Z return self._call_impl(*args, **kwargs) 2023-05-06T12:02:58.8715919Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:02:58.8716272Z return forward_call(*args, **kwargs) 2023-05-06T12:02:58.8717334Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 560, in forward 2023-05-06T12:02:58.8717877Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as( 2023-05-06T12:02:58.8718744Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 389.06 MiB is free. Process 869389 has 39.01 GiB memory in use. Of the allocated memory 37.72 GiB is allocated by PyTorch, and 787.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T12:02:58.8719622Z 2023-05-06T12:02:58.8719794Z The above exception was the direct cause of the following exception: 2023-05-06T12:02:58.8720005Z 2023-05-06T12:02:58.8720121Z Traceback (most recent call last): 2023-05-06T12:02:58.8720454Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T12:02:58.8720815Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T12:02:58.8721205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T12:02:58.8721567Z self.validate_model(model, example_inputs) 2023-05-06T12:02:58.8721914Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T12:02:58.8722291Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T12:02:58.8722625Z NotImplementedError: Eager model failed to run 2023-05-06T12:02:58.8722806Z 2023-05-06T12:02:58.8722925Z WARNING:root:hf_T5_base failed to load 2023-05-06T12:03:16.3943957Z cuda eval hf_T5_large pass_due_to_skip 2023-05-06T12:03:52.5728462Z cuda eval lennard_jones pass 2023-05-06T12:04:11.0406098Z cuda eval llama ERROR:common:backend='inductor' raised: 2023-05-06T12:04:11.0406572Z AssertionError: slice.Tensor is not supported with cpp wrapper 2023-05-06T12:04:11.0406779Z 2023-05-06T12:04:11.0406786Z 2023-05-06T12:04:11.0406964Z You can suppress this exception and fall back to eager by setting: 2023-05-06T12:04:11.0407268Z import torch._dynamo 2023-05-06T12:04:11.0407532Z torch._dynamo.config.suppress_errors = True 2023-05-06T12:04:11.0407821Z Traceback (most recent call last): 2023-05-06T12:04:11.0410623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T12:04:11.0411271Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T12:04:11.0412280Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T12:04:11.0412752Z return fn(*args, **kwargs) 2023-05-06T12:04:11.0413312Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T12:04:11.0413981Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T12:04:11.0415081Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T12:04:11.0415971Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T12:04:11.0417332Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T12:04:11.0417757Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T12:04:11.0418287Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T12:04:11.0418627Z return fn(*args, **kwargs) 2023-05-06T12:04:11.0419452Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T12:04:11.0420031Z return _compile( 2023-05-06T12:04:11.0420898Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:04:11.0421713Z r = func(*args, **kwargs) 2023-05-06T12:04:11.0422211Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 
2023-05-06T12:04:11.0422599Z out_code = transform_code_object(code, transform) 2023-05-06T12:04:11.0423192Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T12:04:11.0423606Z transformations(instructions, code_options) 2023-05-06T12:04:11.0424111Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T12:04:11.0424441Z tracer.run() 2023-05-06T12:04:11.0424914Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T12:04:11.0425224Z super().run() 2023-05-06T12:04:11.0425760Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T12:04:11.0426089Z and self.step() 2023-05-06T12:04:11.0426543Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T12:04:11.0426908Z getattr(self, inst.opname)(inst) 2023-05-06T12:04:11.0427433Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T12:04:11.0427805Z self.output.compile_subgraph( 2023-05-06T12:04:11.0428310Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T12:04:11.0428729Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T12:04:11.0429097Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T12:04:11.0429387Z return func(*args, **kwds) 2023-05-06T12:04:11.0429922Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T12:04:11.0430315Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T12:04:11.0430816Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:04:11.0431139Z r = func(*args, **kwargs) 2023-05-06T12:04:11.0431642Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T12:04:11.0432071Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T12:04:11.0432632Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T12:04:11.0433033Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T12:04:11.0433579Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T12:04:11.0433968Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T12:04:11.0434473Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T12:04:11.0434967Z return compile_fx(*args, **kwargs) 2023-05-06T12:04:11.0435505Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T12:04:11.0435853Z return compile_fx_with_cpp_wrapper( 2023-05-06T12:04:11.0436396Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_with_cpp_wrapper 2023-05-06T12:04:11.0436931Z return compile_fx( 2023-05-06T12:04:11.0437422Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T12:04:11.0437754Z return aot_autograd( 2023-05-06T12:04:11.0438244Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T12:04:11.0438770Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T12:04:11.0439332Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T12:04:11.0439735Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T12:04:11.0440244Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:04:11.0440576Z r = func(*args, **kwargs) 2023-05-06T12:04:11.0441105Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T12:04:11.0441567Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:04:11.0442156Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T12:04:11.0442602Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:04:11.0443193Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T12:04:11.0443634Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:04:11.0444203Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T12:04:11.0444589Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T12:04:11.0445104Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:04:11.0445478Z r = func(*args, **kwargs) 2023-05-06T12:04:11.0445978Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T12:04:11.0446312Z return inner_compile( 2023-05-06T12:04:11.0446809Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T12:04:11.0447195Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T12:04:11.0447694Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T12:04:11.0448026Z return fn(*args, **kwargs) 2023-05-06T12:04:11.0448342Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T12:04:11.0448646Z return func(*args, **kwds) 2023-05-06T12:04:11.0449132Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner 2023-05-06T12:04:11.0449483Z graph.run(*example_inputs) 2023-05-06T12:04:11.0449964Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:04:11.0450279Z r = func(*args, **kwargs) 2023-05-06T12:04:11.0450743Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run 2023-05-06T12:04:11.0451073Z return super().run(*args) 2023-05-06T12:04:11.0451671Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T12:04:11.0452003Z self.env[node] = self.run_node(node) 2023-05-06T12:04:11.0452486Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 476, in run_node 2023-05-06T12:04:11.0452883Z result = fallback_handler(n.target, add_to_fallback_set=False)( 2023-05-06T12:04:11.0453396Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 1043, in handler 2023-05-06T12:04:11.0453812Z TensorBox.create, ir.FallbackKernel.create(kernel, *args, **kwargs) 2023-05-06T12:04:11.0454338Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3182, in create 2023-05-06T12:04:11.0454673Z packed = FallbackKernel( 2023-05-06T12:04:11.0455222Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3122, in __init__ 2023-05-06T12:04:11.0455600Z assert ( 2023-05-06T12:04:11.0456037Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T12:04:11.0456406Z AssertionError: slice.Tensor is not supported with cpp wrapper 2023-05-06T12:04:11.0456606Z 2023-05-06T12:04:11.0456612Z 2023-05-06T12:04:11.0456774Z You can suppress this exception and fall back to eager by setting: 2023-05-06T12:04:11.0457063Z import torch._dynamo 2023-05-06T12:04:11.0457324Z torch._dynamo.config.suppress_errors = True 2023-05-06T12:04:11.0457507Z 2023-05-06T12:04:11.0457675Z TorchDynamo optimized model failed to run because of following error 2023-05-06T12:04:11.0471946Z fail_to_run 2023-05-06T12:04:15.4760771Z cuda eval maml pass_due_to_skip 2023-05-06T12:04:52.4401164Z cuda eval maml_omniglot pass 2023-05-06T12:05:45.8289496Z cuda eval mnasnet1_0 pass 2023-05-06T12:06:40.5505830Z cuda eval mobilenet_v2 pass 2023-05-06T12:06:44.8450889Z The eval test only supports CPU. 2023-05-06T12:06:44.8452933Z Traceback (most recent call last): 2023-05-06T12:06:44.8453607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T12:06:44.8454129Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T12:06:44.8459223Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T12:06:44.8459603Z benchmark = benchmark_cls( 2023-05-06T12:06:44.8460060Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T12:06:44.8460691Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T12:06:44.8461089Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T12:06:44.8461541Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T12:06:44.8461900Z NotImplementedError: The eval test only supports CPU. 
2023-05-06T12:06:44.8462107Z 2023-05-06T12:06:44.8462254Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T12:07:40.9872324Z cuda eval mobilenet_v3_large pass 2023-05-06T12:07:48.1085289Z cuda eval moco [2023-05-06 12:07:48,107] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T12:16:21.8738912Z [2023-05-06 12:16:21,871] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T12:16:21.8739655Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T12:16:21.8740643Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T12:17:10.9724783Z [2023-05-06 12:17:10,970] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T12:17:30.1380857Z ERROR:common:backend='compile_fn' raised: 2023-05-06T12:17:30.1386246Z RuntimeError: Error building extension 'inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp -o main.o 2023-05-06T12:17:30.1391422Z FAILED: main.o 2023-05-06T12:17:30.1397489Z c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem 
/opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp -o main.o 2023-05-06T12:17:30.1401885Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:52: warning: multi-character character constant [-Wmultichar] 2023-05-06T12:17:30.1404082Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1404380Z ^~~~~ 2023-05-06T12:17:30.1405846Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T12:17:30.1407018Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:33: error: ‘device’ was not declared in this scope 2023-05-06T12:17:30.1407693Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1407972Z ^~~~~~ 2023-05-06T12:17:30.1408447Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:33: note: suggested alternatives: 2023-05-06T12:17:30.1409500Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T12:17:30.1410106Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T12:17:30.1410660Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1411263Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1411993Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1412615Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1413206Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T12:17:30.1413799Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1414415Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1415025Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1415579Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1416122Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1416919Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1417312Z inline TensorOptions device(Device device) { 2023-05-06T12:17:30.1417553Z ^~~~~~ 2023-05-06T12:17:30.1418045Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 
2023-05-06T12:17:30.1418652Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1419232Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1419806Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1420387Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1421111Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1421806Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1422470Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1423122Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1423769Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1424408Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1425019Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1425775Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1426348Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1426872Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1427686Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T12:17:30.1428060Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1428280Z ^ 2023-05-06T12:17:30.1428942Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T12:17:30.1429320Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1429556Z ^ 2023-05-06T12:17:30.1430108Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T12:17:30.1430473Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1430689Z ^ 2023-05-06T12:17:30.1431187Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T12:17:30.1431790Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T12:17:30.1432352Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1432906Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1433481Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1434092Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1434676Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T12:17:30.1435266Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1435853Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1436463Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1442207Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1443135Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1444529Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1445178Z inline TensorOptions device(Device device) { 2023-05-06T12:17:30.1445591Z ^~~~~~ 2023-05-06T12:17:30.1448454Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1449463Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1450268Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1451161Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1452143Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1453483Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1454461Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1455545Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1456739Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1458064Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1458979Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1459873Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1460858Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1461868Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1462876Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1463819Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1464718Z from 
/var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1466159Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T12:17:30.1466758Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1467103Z ^ 2023-05-06T12:17:30.1468000Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T12:17:30.1468640Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1468989Z ^ 2023-05-06T12:17:30.1469879Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T12:17:30.1470515Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1470832Z ^ 2023-05-06T12:17:30.1471850Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:47: error: ‘type’ was not declared in this scope 2023-05-06T12:17:30.1472789Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1473236Z ^~~~ 2023-05-06T12:17:30.1474058Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:47: note: suggested alternatives: 2023-05-06T12:17:30.1475373Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/type_caster_base.h:12:0, 2023-05-06T12:17:30.1476437Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/cast.h:15, 2023-05-06T12:17:30.1477586Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/attr.h:14, 2023-05-06T12:17:30.1478528Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/class.h:12, 2023-05-06T12:17:30.1479422Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:13, 2023-05-06T12:17:30.1480727Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/Exceptions.h:14, 2023-05-06T12:17:30.1481748Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2023-05-06T12:17:30.1482729Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:6, 2023-05-06T12:17:30.1483643Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1484986Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pytypes.h:1412:7: note: ‘pybind11::type’ 2023-05-06T12:17:30.1485840Z class type : public object { 2023-05-06T12:17:30.1486291Z ^~~~ 2023-05-06T12:17:30.1487115Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/profiler/stubs/base.h:6:0, 2023-05-06T12:17:30.1488151Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler_kineto.h:8, 2023-05-06T12:17:30.1489127Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler.h:3, 2023-05-06T12:17:30.1490114Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:7, 2023-05-06T12:17:30.1491116Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, 2023-05-06T12:17:30.1492084Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, 2023-05-06T12:17:30.1492999Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16, 2023-05-06T12:17:30.1493891Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1494668Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1495896Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/util/strong_type.h:87:7: note: ‘strong::type’ 2023-05-06T12:17:30.1496602Z class type : public modifier>... 2023-05-06T12:17:30.1496953Z ^~~~ 2023-05-06T12:17:30.1497994Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:60: error: ‘pin_memory’ was not declared in this scope 2023-05-06T12:17:30.1498987Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1499457Z ^~~~~~~~~~ 2023-05-06T12:17:30.1500185Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:60: note: suggested alternatives: 2023-05-06T12:17:30.1501278Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1502195Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1503066Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1503905Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1504761Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1505712Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1507022Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1508052Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1509011Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1509945Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1511060Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1511962Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1512837Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1513672Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1514445Z from 
/var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1515653Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T12:17:30.1516274Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1516767Z ^ 2023-05-06T12:17:30.1517701Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T12:17:30.1518244Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1518596Z ^ 2023-05-06T12:17:30.1519359Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/MethodOperators.h:302:0, 2023-05-06T12:17:30.1520312Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:40, 2023-05-06T12:17:30.1521246Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1522183Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1523071Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1523926Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1524819Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T12:17:30.1525685Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1526648Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1527521Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1528349Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1529130Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1530284Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_ops.h:17:18: note: ‘at::_ops::pin_memory’ 2023-05-06T12:17:30.1531051Z struct TORCH_API pin_memory { 2023-05-06T12:17:30.1531386Z ^~~~~~~~~~ 2023-05-06T12:17:30.1532126Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/NativeFunctions.h:945:0, 2023-05-06T12:17:30.1533002Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/TensorIndexing.h:13, 2023-05-06T12:17:30.1533824Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ATen.h:18, 2023-05-06T12:17:30.1534699Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, 2023-05-06T12:17:30.1535831Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, 2023-05-06T12:17:30.1536838Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, 2023-05-06T12:17:30.1537768Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4, 
2023-05-06T12:17:30.1538799Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, 2023-05-06T12:17:30.1539714Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, 2023-05-06T12:17:30.1540589Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1541453Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1542271Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1543468Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_native.h:19:22: note: ‘at::native::pin_memory’ 2023-05-06T12:17:30.1544148Z TORCH_API at::Tensor pin_memory(const at::Tensor & self, c10::optional device=c10::nullopt); 2023-05-06T12:17:30.1544606Z ^~~~~~~~~~ 2023-05-06T12:17:30.1545319Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1546288Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1547159Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1548033Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1548926Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1549945Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1550946Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1551873Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1552765Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1553737Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1554725Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1556215Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1557363Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1558275Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1559092Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1560632Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T12:17:30.1561199Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1561517Z ^ 2023-05-06T12:17:30.1562340Z 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T12:17:30.1562892Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1563203Z ^ 2023-05-06T12:17:30.1564156Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:71: error: ‘False’ was not declared in this scope 2023-05-06T12:17:30.1565093Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1565539Z ^~~~~ 2023-05-06T12:17:30.1566600Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:71: note: suggested alternative: ‘pause’ 2023-05-06T12:17:30.1567569Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1567995Z ^~~~~ 2023-05-06T12:17:30.1568373Z pause 2023-05-06T12:17:30.1569303Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:35:17: error: unable to deduce ‘auto’ from ‘buf0’ 2023-05-06T12:17:30.1569942Z auto buf1 = buf0; 2023-05-06T12:17:30.1570237Z ^~~~ 2023-05-06T12:17:30.1570637Z ninja: build stopped: subcommand failed. 2023-05-06T12:17:30.1570871Z 2023-05-06T12:17:30.1570896Z 2023-05-06T12:17:30.1570903Z 2023-05-06T12:17:30.1571140Z You can suppress this exception and fall back to eager by setting: 2023-05-06T12:17:30.1571576Z import torch._dynamo 2023-05-06T12:17:30.1571994Z torch._dynamo.config.suppress_errors = True 2023-05-06T12:17:30.1572421Z Traceback (most recent call last): 2023-05-06T12:17:30.1572951Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T12:17:30.1573511Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T12:17:30.1574299Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T12:17:30.1574761Z return fn(*args, **kwargs) 2023-05-06T12:17:30.1575236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T12:17:30.1575764Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T12:17:30.1576342Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T12:17:30.1576807Z return mod(*inputs) 2023-05-06T12:17:30.1577533Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:17:30.1578076Z return self._call_impl(*args, **kwargs) 2023-05-06T12:17:30.1578789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:17:30.1579579Z return forward_call(*args, **kwargs) 2023-05-06T12:17:30.1580301Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T12:17:30.1580796Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T12:17:30.1581556Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T12:17:30.1582112Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T12:17:30.1582862Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T12:17:30.1583384Z return 
self._call_impl(*args, **kwargs) 2023-05-06T12:17:30.1584266Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T12:17:30.1584782Z return forward_call(*args, **kwargs) 2023-05-06T12:17:30.1585319Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T12:17:30.1585899Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T12:17:30.1586571Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T12:17:30.1587139Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T12:17:30.1587918Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T12:17:30.1588414Z return func(*args, **kwargs) 2023-05-06T12:17:30.1588944Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 76, in _batch_shuffle_ddp 2023-05-06T12:17:30.1589455Z x_gather = concat_all_gather(x) 2023-05-06T12:17:30.1590161Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T12:17:30.1590732Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T12:17:30.1591525Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T12:17:30.1592075Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T12:17:30.1592815Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T12:17:30.1593284Z return fn(*args, **kwargs) 2023-05-06T12:17:30.1593993Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T12:17:30.1594500Z return _compile( 2023-05-06T12:17:30.1595156Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:17:30.1595605Z r = func(*args, **kwargs) 2023-05-06T12:17:30.1596299Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T12:17:30.1597505Z out_code = transform_code_object(code, transform) 2023-05-06T12:17:30.1598346Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T12:17:30.1598920Z transformations(instructions, code_options) 2023-05-06T12:17:30.1599666Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T12:17:30.1600130Z tracer.run() 2023-05-06T12:17:30.1600807Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T12:17:30.1601265Z super().run() 2023-05-06T12:17:30.1601956Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T12:17:30.1602436Z and self.step() 2023-05-06T12:17:30.1603110Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T12:17:30.1603862Z getattr(self, inst.opname)(inst) 2023-05-06T12:17:30.1604586Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 
2023-05-06T12:17:30.1605151Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T12:17:30.1605917Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T12:17:30.1606583Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T12:17:30.1607171Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T12:17:30.1607615Z return func(*args, **kwds) 2023-05-06T12:17:30.1608553Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T12:17:30.1609134Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T12:17:30.1609920Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:17:30.1610349Z r = func(*args, **kwargs) 2023-05-06T12:17:30.1611041Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T12:17:30.1611611Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T12:17:30.1612380Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T12:17:30.1612922Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T12:17:30.1613687Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T12:17:30.1614247Z return self.backend_compile_fn(gm, example_inputs) 2023-05-06T12:17:30.1615025Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T12:17:30.1615592Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T12:17:30.1616434Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T12:17:30.1616936Z return compile_fx(*args, **kwargs) 2023-05-06T12:17:30.1617676Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T12:17:30.1618199Z return compile_fx_with_cpp_wrapper( 2023-05-06T12:17:30.1619005Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 533, in compile_fx_with_cpp_wrapper 2023-05-06T12:17:30.1619522Z return compile_fx( 2023-05-06T12:17:30.1620261Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T12:17:30.1620757Z return aot_autograd( 2023-05-06T12:17:30.1621480Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T12:17:30.1622048Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T12:17:30.1622842Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T12:17:30.1623445Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T12:17:30.1624193Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:17:30.1624660Z r = func(*args, **kwargs) 2023-05-06T12:17:30.1625445Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T12:17:30.1626215Z compiled_fn = 
compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:17:30.1627029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T12:17:30.1627886Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:17:30.1628704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T12:17:30.1629299Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T12:17:30.1630147Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T12:17:30.1630754Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T12:17:30.1631658Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:17:30.1632135Z r = func(*args, **kwargs) 2023-05-06T12:17:30.1632898Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T12:17:30.1633427Z return inner_compile( 2023-05-06T12:17:30.1634163Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T12:17:30.1634713Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T12:17:30.1635470Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T12:17:30.1636056Z return fn(*args, **kwargs) 2023-05-06T12:17:30.1636501Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T12:17:30.1637117Z return func(*args, **kwds) 2023-05-06T12:17:30.1637900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T12:17:30.1638416Z compiled_fn = graph.compile_to_fn() 2023-05-06T12:17:30.1639154Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T12:17:30.1639689Z return self.compile_to_module().call 2023-05-06T12:17:30.1640409Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T12:17:30.1640889Z r = func(*args, **kwargs) 2023-05-06T12:17:30.1641614Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 695, in compile_to_module 2023-05-06T12:17:30.1642176Z mod = PyCodeCache.load(code, linemap=linemap) 2023-05-06T12:17:30.1642923Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 706, in load 2023-05-06T12:17:30.1643485Z return cls.load_by_key_path(key, path, linemap) 2023-05-06T12:17:30.1644278Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 721, in load_by_key_path 2023-05-06T12:17:30.1644798Z exec(code, mod.__dict__, mod.__dict__) 2023-05-06T12:17:30.1645407Z File "/tmp/torchinductor_jenkins/4e/c4e34vfnet7u6rk5jlnnofw2yh7t5g5ddkbgvsifbzq52g2aeurt.py", line 56, in 2023-05-06T12:17:30.1646067Z module = load_inline( 2023-05-06T12:17:30.1646850Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1433, in load_inline 2023-05-06T12:17:30.1647369Z return _jit_compile( 2023-05-06T12:17:30.1648053Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile 2023-05-06T12:17:30.1648612Z _write_ninja_file_and_build_library( 2023-05-06T12:17:30.1649477Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1623, in _write_ninja_file_and_build_library 2023-05-06T12:17:30.1650034Z _run_ninja_build( 2023-05-06T12:17:30.1650765Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1910, in _run_ninja_build 2023-05-06T12:17:30.1651279Z raise RuntimeError(message) from e 2023-05-06T12:17:30.1652160Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T12:17:30.1657456Z RuntimeError: Error building extension 'inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4': [1/2] c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp -o main.o 2023-05-06T12:17:30.1660596Z FAILED: main.o 2023-05-06T12:17:30.1665153Z c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c 
/var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp -o main.o 2023-05-06T12:17:30.1668872Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:52: warning: multi-character character constant [-Wmultichar] 2023-05-06T12:17:30.1669849Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1670265Z ^~~~~ 2023-05-06T12:17:30.1671561Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T12:17:30.1673008Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:33: error: ‘device’ was not declared in this scope 2023-05-06T12:17:30.1673884Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1674262Z ^~~~~~ 2023-05-06T12:17:30.1675196Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:33: note: suggested alternatives: 2023-05-06T12:17:30.1676262Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T12:17:30.1713381Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T12:17:30.1714193Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1714949Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1716124Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1717154Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1718031Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T12:17:30.1718884Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1719754Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1720625Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1721448Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1722217Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1723921Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1724460Z inline TensorOptions device(Device device) { 2023-05-06T12:17:30.1724813Z ^~~~~~ 2023-05-06T12:17:30.1725510Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1726447Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1727308Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1728158Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1728998Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1729907Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1730849Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1731788Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1732706Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1733652Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1734523Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1735637Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1736521Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1737304Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1738014Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1739104Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T12:17:30.1739767Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1740058Z ^ 2023-05-06T12:17:30.1740851Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T12:17:30.1741369Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1741681Z ^ 2023-05-06T12:17:30.1742414Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T12:17:30.1742897Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1743205Z ^ 2023-05-06T12:17:30.1743809Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T12:17:30.1744630Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T12:17:30.1745385Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1746220Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1747111Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1748032Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1748930Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 
2023-05-06T12:17:30.1749796Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1750720Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1751639Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1752486Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1753273Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1754491Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1755252Z inline TensorOptions device(Device device) { 2023-05-06T12:17:30.1755633Z ^~~~~~ 2023-05-06T12:17:30.1756507Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1757811Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T12:17:30.1758723Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1759600Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1760722Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1761554Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1762393Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1763258Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1764165Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1765310Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1766384Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1767408Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1768401Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1769370Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1770386Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1771343Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1772198Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1773484Z 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T12:17:30.1773995Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1774272Z ^ 2023-05-06T12:17:30.1775040Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T12:17:30.1775614Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1776038Z ^ 2023-05-06T12:17:30.1776964Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T12:17:30.1777534Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1777878Z ^ 2023-05-06T12:17:30.1778879Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:47: error: ‘type’ was not declared in this scope 2023-05-06T12:17:30.1779857Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1780294Z ^~~~ 2023-05-06T12:17:30.1780993Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:47: note: suggested alternatives: 2023-05-06T12:17:30.1782093Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/type_caster_base.h:12:0, 2023-05-06T12:17:30.1782967Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/cast.h:15, 2023-05-06T12:17:30.1783783Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/attr.h:14, 2023-05-06T12:17:30.1784631Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/class.h:12, 2023-05-06T12:17:30.1785730Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:13, 2023-05-06T12:17:30.1786728Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/Exceptions.h:14, 2023-05-06T12:17:30.1787668Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2023-05-06T12:17:30.1788610Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:6, 2023-05-06T12:17:30.1789378Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1790909Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pytypes.h:1412:7: note: ‘pybind11::type’ 2023-05-06T12:17:30.1791516Z class type : public object { 2023-05-06T12:17:30.1791850Z ^~~~ 2023-05-06T12:17:30.1792718Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/profiler/stubs/base.h:6:0, 2023-05-06T12:17:30.1793792Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler_kineto.h:8, 2023-05-06T12:17:30.1794781Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler.h:3, 2023-05-06T12:17:30.1795793Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:7, 2023-05-06T12:17:30.1797108Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, 2023-05-06T12:17:30.1798175Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, 2023-05-06T12:17:30.1798841Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16, 2023-05-06T12:17:30.1799405Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1799942Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1800743Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/util/strong_type.h:87:7: note: ‘strong::type’ 2023-05-06T12:17:30.1801137Z class type : public modifier>... 2023-05-06T12:17:30.1801376Z ^~~~ 2023-05-06T12:17:30.1802065Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:60: error: ‘pin_memory’ was not declared in this scope 2023-05-06T12:17:30.1802714Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1803028Z ^~~~~~~~~~ 2023-05-06T12:17:30.1803510Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:60: note: suggested alternatives: 2023-05-06T12:17:30.1804218Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1804826Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1805400Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1806047Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1806629Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1807507Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1808161Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1808834Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1809491Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1810238Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1810895Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1811498Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1812099Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1812663Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1813185Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1813989Z 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T12:17:30.1814369Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1814593Z ^ 2023-05-06T12:17:30.1815148Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T12:17:30.1815527Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1815745Z ^ 2023-05-06T12:17:30.1816246Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/MethodOperators.h:302:0, 2023-05-06T12:17:30.1816854Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:40, 2023-05-06T12:17:30.1817415Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T12:17:30.1817954Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T12:17:30.1818534Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T12:17:30.1819132Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T12:17:30.1819733Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T12:17:30.1820320Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T12:17:30.1820914Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T12:17:30.1821525Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T12:17:30.1822086Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1822630Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1823395Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_ops.h:17:18: note: ‘at::_ops::pin_memory’ 2023-05-06T12:17:30.1823875Z struct TORCH_API pin_memory { 2023-05-06T12:17:30.1824108Z ^~~~~~~~~~ 2023-05-06T12:17:30.1824591Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/NativeFunctions.h:945:0, 2023-05-06T12:17:30.1825187Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/TensorIndexing.h:13, 2023-05-06T12:17:30.1825733Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ATen.h:18, 2023-05-06T12:17:30.1826353Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, 2023-05-06T12:17:30.1827093Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, 2023-05-06T12:17:30.1827766Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, 2023-05-06T12:17:30.1828444Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4, 2023-05-06T12:17:30.1829102Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, 
2023-05-06T12:17:30.1829709Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, 2023-05-06T12:17:30.1830308Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1830881Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1831420Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1832212Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_native.h:19:22: note: ‘at::native::pin_memory’ 2023-05-06T12:17:30.1832689Z TORCH_API at::Tensor pin_memory(const at::Tensor & self, c10::optional device=c10::nullopt); 2023-05-06T12:17:30.1833005Z ^~~~~~~~~~ 2023-05-06T12:17:30.1833497Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T12:17:30.1834089Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T12:17:30.1834680Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T12:17:30.1835256Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T12:17:30.1835840Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T12:17:30.1836578Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T12:17:30.1837588Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T12:17:30.1838261Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T12:17:30.1838915Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T12:17:30.1839553Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T12:17:30.1840366Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T12:17:30.1840982Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T12:17:30.1841586Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T12:17:30.1842134Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T12:17:30.1842677Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:1: 2023-05-06T12:17:30.1843630Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T12:17:30.1844006Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T12:17:30.1844224Z ^ 2023-05-06T12:17:30.1844800Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T12:17:30.1845174Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 
2023-05-06T12:17:30.1845380Z ^ 2023-05-06T12:17:30.1846091Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:71: error: ‘False’ was not declared in this scope 2023-05-06T12:17:30.1846733Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1847053Z ^~~~~ 2023-05-06T12:17:30.1847742Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:34:71: note: suggested alternative: ‘pause’ 2023-05-06T12:17:30.1848372Z auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False); 2023-05-06T12:17:30.1848697Z ^~~~~ 2023-05-06T12:17:30.1848965Z pause 2023-05-06T12:17:30.1849643Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_cup53eaxc6ql5loy4loahmxquovnavpimuq6wsco6ggww7qzj5d4/main.cpp:35:17: error: unable to deduce ‘auto’ from ‘buf0’ 2023-05-06T12:17:30.1850097Z auto buf1 = buf0; 2023-05-06T12:17:30.1850303Z ^~~~ 2023-05-06T12:17:30.1850533Z ninja: build stopped: subcommand failed. 2023-05-06T12:17:30.1850700Z 2023-05-06T12:17:30.1850705Z 2023-05-06T12:17:30.1850711Z 2023-05-06T12:17:30.1850877Z You can suppress this exception and fall back to eager by setting: 2023-05-06T12:17:30.1851165Z import torch._dynamo 2023-05-06T12:17:30.1851452Z torch._dynamo.config.suppress_errors = True 2023-05-06T12:17:30.1851632Z 2023-05-06T12:17:30.1851788Z TorchDynamo optimized model failed to run because of following error 2023-05-06T12:17:30.1852076Z fail_to_run 2023-05-06T12:18:12.0051556Z cuda eval nvidia_deeprecommender pass 2023-05-06T12:18:16.8943664Z cuda eval opacus_cifar10 [2023-05-06 12:18:16,893] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T12:18:16.8945517Z [2023-05-06 12:18:16,893] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 
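The build failure above is Inductor's C++ wrapper emitting Python keyword-argument syntax into the generated main.cpp: `auto buf0 = at::randperm(4, device=device(type='cpu'), pin_memory=False);` is a legal Python call but not C++, which is why g++ reports that `device`, `type`, `pin_memory`, and `False` are not declared in scope. The log's own suggestion is to suppress the exception and fall back to eager. Below is a minimal sketch of that fallback, assuming a toy function in place of the benchmark's real model (the function and inputs are placeholders, not the moco benchmark code):

    # Sketch of the fallback suggested by the log message above; the function
    # and inputs are placeholders, not the moco benchmark's actual code.
    import torch
    import torch._dynamo

    # Per the log: suppress backend compile errors and fall back to eager.
    torch._dynamo.config.suppress_errors = True

    def make_perm(n: int):
        # The same op the generated C++ wrapper failed on; in Python the
        # keyword form is valid.
        return torch.randperm(n, device=torch.device("cpu"), pin_memory=False)

    compiled = torch.compile(make_perm)
    print(compiled(4))  # runs eagerly if the Inductor backend fails to build

The accuracy harness here leaves that flag unset, so the moco run is recorded as fail_to_run instead of falling back.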
2023-05-06T12:23:54.9299699Z pass 2023-05-06T12:24:50.8059210Z cuda eval phlippe_densenet pass 2023-05-06T12:25:33.1927217Z cuda eval phlippe_resnet pass 2023-05-06T12:25:34.2671291Z accuracy pass_rate=76.00% 2023-05-06T12:25:34.2671946Z calls_captured gmean=0.00x mean=424.120x 2023-05-06T12:25:34.2674360Z unique_graphs gmean=0.00x mean=9.160x 2023-05-06T12:25:34.2675877Z graph_breaks gmean=0.00x mean=8.160x 2023-05-06T12:25:34.2678207Z unique_graph_breaks gmean=0.00x mean=1.040x 2023-05-06T12:25:34.8296171Z + [[ inductor_torchbench_perf == *max_autotune* ]] 2023-05-06T12:25:34.8298978Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --inference --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_no_cudagraphs_torchbench_amp_inference_cuda_performance.csv 2023-05-06T12:25:48.0122405Z cuda eval functorch_maml_omniglot 1.293x 2023-05-06T12:26:12.5976009Z cuda eval hf_Albert 1.961x 2023-05-06T12:26:49.0738950Z cuda eval hf_Bart 1.325x 2023-05-06T12:27:13.0356465Z cuda eval hf_Bert 1.611x 2023-05-06T12:27:51.9320700Z cuda eval hf_Bert_large 1.933x 2023-05-06T12:29:19.6132612Z cuda eval hf_BigBird 1.378x 2023-05-06T12:29:39.9977620Z cuda eval hf_DistilBert 1.395x 2023-05-06T12:30:11.6871282Z cuda eval hf_GPT2 1.707x 2023-05-06T12:31:15.2067920Z cuda eval hf_GPT2_large 2.297x 2023-05-06T12:32:37.4671783Z cuda eval hf_Longformer 1.329x 2023-05-06T12:33:01.1987681Z cuda eval hf_Reformer [2023-05-06 12:33:01,196] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:33:09.0821835Z [2023-05-06 12:33:09,080] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:33:12.1263486Z [2023-05-06 12:33:12,125] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:33:14.9537299Z 1.994x 2023-05-06T12:33:53.9613942Z cuda eval hf_T5 1.640x 2023-05-06T12:34:46.6660754Z cuda eval hf_T5_base 1.415x 2023-05-06T12:36:11.6268085Z cuda eval hf_T5_large 1.304x 2023-05-06T12:36:21.4236116Z cuda eval lennard_jones 0.813x 2023-05-06T12:36:42.1270846Z cuda eval llama 1.366x 2023-05-06T12:37:18.7866903Z cuda eval maml 0.625x 2023-05-06T12:37:28.2767471Z cuda eval maml_omniglot 1.431x 2023-05-06T12:37:49.0293046Z cuda eval mnasnet1_0 1.522x 2023-05-06T12:38:10.5005071Z cuda eval mobilenet_v2 1.542x 2023-05-06T12:38:14.6822154Z The eval test only supports CPU. 2023-05-06T12:38:14.6824708Z Traceback (most recent call last): 2023-05-06T12:38:14.6825268Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T12:38:14.6829129Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T12:38:14.6829705Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T12:38:14.6830071Z benchmark = benchmark_cls( 2023-05-06T12:38:14.6830578Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T12:38:14.6830936Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T12:38:14.6831362Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T12:38:14.6831795Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T12:38:14.6832136Z NotImplementedError: The eval test only supports CPU. 
2023-05-06T12:38:14.6834406Z 2023-05-06T12:38:14.6834823Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T12:38:38.0686770Z cuda eval mobilenet_v3_large 1.443x 2023-05-06T12:38:45.2519103Z cuda eval moco [2023-05-06 12:38:45,250] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T12:39:27.5279413Z [2023-05-06 12:39:27,525] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T12:39:27.5280726Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T12:39:27.5282552Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T12:39:28.0647016Z [2023-05-06 12:39:28,063] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T12:39:54.7241173Z 1.433x 2023-05-06T12:40:10.1301898Z cuda eval nvidia_deeprecommender 0.990x 2023-05-06T12:40:14.8820457Z cuda eval opacus_cifar10 [2023-05-06 12:40:14,880] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T12:40:14.8822572Z [2023-05-06 12:40:14,880] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 
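The opacus_cifar10 entries carry two recurring torch._dynamo.output_graph warnings: nn.Module forward/_pre hooks are only partially guarded, and state_dict/backward hooks are silently ignored by torch.compile. The warning text names the relevant knob directly; a minimal sketch, only needed if hooks are added or removed after calling .compile() and recompilation on hook changes is wanted:

    import torch._dynamo

    # Per the warning: make dynamo guard on nn.Module hooks so that changing
    # hooks after .compile() triggers recompilation instead of being missed.
    torch._dynamo.config.skip_nnmodule_hook_guards = False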
2023-05-06T12:40:31.1383657Z 0.649x 2023-05-06T12:40:53.2585198Z cuda eval phlippe_densenet 1.625x 2023-05-06T12:41:07.1335021Z cuda eval phlippe_resnet 1.707x 2023-05-06T12:41:08.1216124Z speedup gmean=1.39x mean=1.451x 2023-05-06T12:41:08.1216582Z abs_latency gmean=10.95x mean=25.983x 2023-05-06T12:41:08.1221007Z compilation_latency mean=25.589 seconds 2023-05-06T12:41:08.1221678Z compression_ratio mean=0.936x 2023-05-06T12:41:08.1222157Z eager_peak_mem gmean=0.59x mean=1.317x 2023-05-06T12:41:08.1222713Z dynamo_peak_mem gmean=0.72x mean=1.338x 2023-05-06T12:41:08.1223159Z calls_captured gmean=256.65x mean=697.038x 2023-05-06T12:41:08.1223811Z unique_graphs gmean=3.70x mean=18.346x 2023-05-06T12:41:08.1226605Z graph_breaks gmean=0.00x mean=9.500x 2023-05-06T12:41:08.1228507Z unique_graph_breaks gmean=0.00x mean=1.462x 2023-05-06T12:41:08.6896724Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --inference --amp --backend inductor --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_with_cudagraphs_torchbench_amp_inference_cuda_performance.csv 2023-05-06T12:41:21.8571324Z cuda eval functorch_maml_omniglot 3.561x 2023-05-06T12:41:46.4867818Z cuda eval hf_Albert 2.011x 2023-05-06T12:42:25.6531880Z cuda eval hf_Bart 1.440x 2023-05-06T12:42:50.0455548Z cuda eval hf_Bert 1.666x 2023-05-06T12:43:29.2839115Z cuda eval hf_Bert_large 2.148x 2023-05-06T12:45:15.9729803Z cuda eval hf_BigBird 1.905x 2023-05-06T12:45:37.2730315Z cuda eval hf_DistilBert 1.417x 2023-05-06T12:46:09.4474051Z cuda eval hf_GPT2 1.766x 2023-05-06T12:47:13.4404686Z cuda eval hf_GPT2_large 2.307x 2023-05-06T12:48:36.4980005Z cuda eval hf_Longformer 1.337x 2023-05-06T12:48:57.9462206Z cuda eval hf_Reformer [2023-05-06 12:48:57,944] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:00.0329563Z [2023-05-06 12:49:00,032] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:49:04.5117238Z [2023-05-06 12:49:04,510] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:06.9484290Z [2023-05-06 12:49:06,947] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:07.9535324Z [2023-05-06 12:49:07,952] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:49:08.7384105Z [2023-05-06 12:49:08,737] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:09.9365074Z [2023-05-06 12:49:09,935] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:10.9395946Z [2023-05-06 12:49:10,938] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T12:49:11.9403596Z [2023-05-06 12:49:11,939] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T12:49:19.6635908Z 1.916x 2023-05-06T12:50:06.1050119Z cuda eval hf_T5 1.689x 2023-05-06T12:51:15.1856096Z cuda eval hf_T5_base 1.537x 2023-05-06T12:53:23.5757532Z cuda eval hf_T5_large 1.982x 2023-05-06T12:53:33.4114628Z cuda eval lennard_jones 1.912x 2023-05-06T12:53:50.5301523Z cuda eval llama [2023-05-06 12:53:50,528] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T12:53:53.9669641Z 1.376x 2023-05-06T13:22:42.6422290Z cuda eval maml 0.001x 2023-05-06T13:22:54.4611638Z cuda eval maml_omniglot 3.447x 2023-05-06T13:23:15.5531656Z cuda 
eval mnasnet1_0 1.521x 2023-05-06T13:23:37.4469377Z cuda eval mobilenet_v2 1.529x 2023-05-06T13:23:41.6148569Z The eval test only supports CPU. 2023-05-06T13:23:41.6150362Z Traceback (most recent call last): 2023-05-06T13:23:41.6150827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T13:23:41.6151195Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T13:23:41.6151587Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T13:23:41.6154184Z benchmark = benchmark_cls( 2023-05-06T13:23:41.6154546Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T13:23:41.6155002Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T13:23:41.6155639Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T13:23:41.6156121Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T13:23:41.6158196Z NotImplementedError: The eval test only supports CPU. 2023-05-06T13:23:41.6158567Z 2023-05-06T13:23:41.6158727Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T13:24:05.1969343Z cuda eval mobilenet_v3_large 1.449x 2023-05-06T13:24:12.2978195Z cuda eval moco [2023-05-06 13:24:12,296] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T13:24:25.2307919Z [2023-05-06 13:24:25,228] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:24:54.2490769Z [2023-05-06 13:24:54,246] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T13:24:54.2491706Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T13:24:54.2492658Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
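The moco run above hits config.cache_size_limit (64): the frame at moco/builder.py:50 was recompiled more than 64 times, after which dynamo stops compiling it. Both knobs mentioned in the warning can be exercised when diagnosing this; a minimal sketch (the value 128 is an arbitrary example, not a recommendation from this log):

    import os
    # Must be set before compilation so guard failures are reported, per the warning.
    os.environ["TORCHDYNAMO_REPORT_GUARD_FAILURES"] = "1"

    import torch._dynamo
    # Raise the per-frame recompile cap from the default of 64 shown in the log.
    torch._dynamo.config.cache_size_limit = 128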
2023-05-06T13:24:54.7795201Z [2023-05-06 13:24:54,778] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T13:24:54.7830132Z [2023-05-06 13:24:54,782] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T13:25:05.3725271Z [2023-05-06 13:25:05,371] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:13.9046410Z [2023-05-06 13:25:13,903] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:15.9350461Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:25:15.9352727Z Traceback (most recent call last): 2023-05-06T13:25:15.9354019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:25:15.9354371Z fn(model, example_inputs) 2023-05-06T13:25:15.9356178Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:25:15.9356932Z return fn(*args, **kwargs) 2023-05-06T13:25:15.9357280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:25:15.9357827Z return mod(*inputs) 2023-05-06T13:25:15.9358977Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:25:15.9359531Z return self._call_impl(*args, **kwargs) 2023-05-06T13:25:15.9360035Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:25:15.9360916Z return forward_call(*args, **kwargs) 2023-05-06T13:25:15.9361528Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T13:25:15.9362197Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T13:25:15.9363152Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T13:25:15.9363972Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T13:25:15.9364699Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:25:15.9365060Z return self._call_impl(*args, **kwargs) 2023-05-06T13:25:15.9365558Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:25:15.9365984Z return forward_call(*args, **kwargs) 2023-05-06T13:25:15.9366358Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T13:25:15.9366755Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T13:25:15.9367174Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T13:25:15.9367576Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T13:25:15.9367968Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T13:25:15.9368360Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T13:25:15.9368869Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:25:15.9369201Z return fn(*args, **kwargs) 2023-05-06T13:25:15.9369665Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T13:25:15.9370024Z return fn(*args, **kwargs) 
2023-05-06T13:25:15.9370518Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3348, in forward 2023-05-06T13:25:15.9370867Z return compiled_fn(full_args) 2023-05-06T13:25:15.9371340Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1260, in g 2023-05-06T13:25:15.9372025Z return f(*args) 2023-05-06T13:25:15.9372525Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2212, in runtime_wrapper 2023-05-06T13:25:15.9372871Z all_outs = call_func_with_args( 2023-05-06T13:25:15.9373393Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1285, in call_func_with_args 2023-05-06T13:25:15.9373757Z out = normalize_as_list(f(args)) 2023-05-06T13:25:15.9374365Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1372, in rng_functionalization_wrapper 2023-05-06T13:25:15.9374726Z return compiled_fw(args) 2023-05-06T13:25:15.9375462Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 331, in run 2023-05-06T13:25:15.9375842Z return model(new_inputs) 2023-05-06T13:25:15.9376304Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 373, in run 2023-05-06T13:25:15.9376654Z return compiled_fn(new_inputs) 2023-05-06T13:25:15.9377193Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 330, in deferred_cudagraphify 2023-05-06T13:25:15.9377538Z return fn(inputs) 2023-05-06T13:25:15.9378016Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1624, in run 2023-05-06T13:25:15.9378384Z out = self._run(new_inputs, function_id) 2023-05-06T13:25:15.9378891Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1677, in _run 2023-05-06T13:25:15.9379245Z return self.execute_node(child, new_inputs) 2023-05-06T13:25:15.9379781Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 1747, in execute_node 2023-05-06T13:25:15.9380133Z return node.run(new_inputs) 2023-05-06T13:25:15.9380615Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/cudagraph_trees.py", line 861, in run 2023-05-06T13:25:15.9380989Z assert data_ptr == new_inputs[idx].data_ptr() 2023-05-06T13:25:15.9381238Z AssertionError 2023-05-06T13:25:18.0414262Z ERROR 2023-05-06T13:25:30.4291526Z cuda eval nvidia_deeprecommender 0.868x 2023-05-06T13:25:35.1467962Z cuda eval opacus_cifar10 [2023-05-06 13:25:35,145] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T13:25:35.1469920Z [2023-05-06 13:25:35,145] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 
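In this cudagraphs-enabled pass, moco fails in warmup() with an AssertionError from torch/_inductor/cudagraph_trees.py (data_ptr == new_inputs[idx].data_ptr()): CUDA graph replay requires stable input addresses, whereas the same model completed at 1.433x in the earlier --disable-cudagraphs invocation. A minimal sketch of turning cudagraphs off for a single torch.compile call rather than via the benchmark flag, assuming the inductor config key triton.cudagraphs present in this build:

    import torch
    import torch._inductor.config as inductor_config

    # Disable CUDA graph capture in inductor; mirrors the benchmark's
    # --disable-cudagraphs flag seen in the commands above.
    inductor_config.triton.cudagraphs = False

    compiled = torch.compile(my_model, backend="inductor")  # my_model: placeholder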
2023-05-06T13:25:39.7254016Z [2023-05-06 13:25:39,724] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:40.9598963Z [2023-05-06 13:25:40,959] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:42.1968629Z [2023-05-06 13:25:42,195] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:42.2455200Z [2023-05-06 13:25:42,244] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:42.2955089Z [2023-05-06 13:25:42,294] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:43.3753534Z [2023-05-06 13:25:43,374] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:44.4573464Z [2023-05-06 13:25:44,456] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:44.5053532Z [2023-05-06 13:25:44,504] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:45.4610481Z [2023-05-06 13:25:45,460] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:46.5369602Z [2023-05-06 13:25:46,536] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:47.6257563Z [2023-05-06 13:25:47,624] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:47.6743991Z [2023-05-06 13:25:47,673] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:48.6589223Z [2023-05-06 13:25:48,658] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:49.7655455Z [2023-05-06 13:25:49,764] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:50.8237887Z [2023-05-06 13:25:50,822] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:50.8697231Z [2023-05-06 13:25:50,869] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:50.9285400Z [2023-05-06 13:25:50,927] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T13:25:51.7287870Z 0.635x 2023-05-06T13:26:13.3359640Z cuda eval phlippe_densenet 2.273x 2023-05-06T13:26:27.2493833Z cuda eval phlippe_resnet 1.687x 2023-05-06T13:26:28.2265139Z abs_latency gmean=0.00x mean=2051.611x 2023-05-06T13:26:28.2265564Z compilation_latency mean=31.380 seconds 2023-05-06T13:26:28.2265837Z compression_ratio mean=0.928x 2023-05-06T13:26:28.2269500Z eager_peak_mem gmean=0.00x mean=1.260x 2023-05-06T13:26:28.2270427Z dynamo_peak_mem gmean=0.00x mean=1.251x 2023-05-06T13:26:28.2273270Z calls_captured gmean=0.00x mean=669.115x 2023-05-06T13:26:28.2276100Z unique_graphs gmean=0.00x mean=15.231x 2023-05-06T13:26:28.2279455Z graph_breaks gmean=0.00x mean=6.577x 2023-05-06T13:26:28.2281881Z unique_graph_breaks gmean=0.00x mean=1.308x 2023-05-06T13:26:28.7939194Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --inference --amp --backend inductor --dynamic-shapes --dynamic-batch-only --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_dynamic_torchbench_amp_inference_cuda_performance.csv 2023-05-06T13:26:41.7852189Z cuda eval functorch_maml_omniglot 1.297x 2023-05-06T13:27:16.9164829Z cuda eval hf_Albert 1.972x 2023-05-06T13:27:57.0695960Z cuda eval hf_Bart ERROR:common:Backend dynamo 
failed in warmup() 2023-05-06T13:27:57.0696351Z Traceback (most recent call last): 2023-05-06T13:27:57.0696708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:27:57.0697037Z fn(model, example_inputs) 2023-05-06T13:27:57.0698473Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:27:57.0698828Z return fn(*args, **kwargs) 2023-05-06T13:27:57.0699156Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:27:57.0699476Z return mod(*inputs) 2023-05-06T13:27:57.0699984Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:27:57.0700373Z return self._call_impl(*args, **kwargs) 2023-05-06T13:27:57.0700937Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:27:57.0701298Z return forward_call(*args, **kwargs) 2023-05-06T13:27:57.0701717Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/framework/huggingface/model_factory.py", line 44, in forward 2023-05-06T13:27:57.0702884Z return self.model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) 2023-05-06T13:27:57.0703491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:27:57.0703900Z return self._call_impl(*args, **kwargs) 2023-05-06T13:27:57.0704407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:27:57.0704761Z return forward_call(*args, **kwargs) 2023-05-06T13:27:57.0705275Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1373, in forward 2023-05-06T13:27:57.0705633Z outputs = self.model( 2023-05-06T13:27:57.0706319Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:27:57.0706678Z return self._call_impl(*args, **kwargs) 2023-05-06T13:27:57.0707190Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:27:57.0707549Z return forward_call(*args, **kwargs) 2023-05-06T13:27:57.0708062Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1237, in forward 2023-05-06T13:27:57.0708432Z encoder_outputs = self.encoder( 2023-05-06T13:27:57.0708938Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T13:27:57.0709331Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:27:57.0709850Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:27:57.0710259Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T13:27:57.0710840Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:27:57.0711185Z return fn(*args, **kwargs) 2023-05-06T13:27:57.0711684Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:27:57.0712032Z return _compile( 2023-05-06T13:27:57.0712505Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 
177, in time_wrapper 2023-05-06T13:27:57.0712829Z r = func(*args, **kwargs) 2023-05-06T13:27:57.0713307Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 470, in _compile 2023-05-06T13:27:57.0713669Z check_fn = CheckFunctionManager( 2023-05-06T13:27:57.0714150Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 747, in __init__ 2023-05-06T13:27:57.0714518Z guard.create(local_builder, global_builder) 2023-05-06T13:27:57.0715007Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_guards.py", line 196, in create 2023-05-06T13:27:57.0715419Z return self.create_fn(self.source.select(local_builder, global_builder), self) 2023-05-06T13:27:57.0715952Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 516, in SHAPE_ENV 2023-05-06T13:27:57.0716329Z guards = output_graph.shape_env.produce_guards( 2023-05-06T13:27:57.0717067Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 2400, in produce_guards 2023-05-06T13:27:57.0717513Z raise ConstraintViolationError(f"Constraints violated!\n{err}") 2023-05-06T13:27:57.0717937Z torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated! 2023-05-06T13:27:57.0718784Z 1. Could not validate constraint UnspecConstraint(L['decoder_input_ids'].size()[0]) as L['decoder_input_ids'].size()[0] is actually a non-atomic symbolic expression 8. Did you really mean to mark this dimension as dynamic? 2023-05-06T13:27:57.0719161Z 2023-05-06T13:27:57.0719316Z 2023-05-06T13:27:57.0719485Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:27:57.0719778Z import torch._dynamo 2023-05-06T13:27:57.0720041Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:27:57.0720222Z 2023-05-06T13:27:58.4809027Z ERROR 2023-05-06T13:28:30.9422729Z cuda eval hf_Bert 1.595x 2023-05-06T13:29:29.0290307Z cuda eval hf_Bert_large 1.945x 2023-05-06T13:29:45.9559479Z cuda eval hf_BigBird ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:29:45.9560575Z Traceback (most recent call last): 2023-05-06T13:29:45.9561216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:29:45.9561777Z fn(model, example_inputs) 2023-05-06T13:29:45.9564059Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:29:45.9564665Z return fn(*args, **kwargs) 2023-05-06T13:29:45.9565254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:29:45.9565816Z return mod(*inputs) 2023-05-06T13:29:45.9566786Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9567491Z return self._call_impl(*args, **kwargs) 2023-05-06T13:29:45.9568381Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9568987Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9570011Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2455, in forward 2023-05-06T13:29:45.9570679Z outputs = self.bert( 2023-05-06T13:29:45.9571566Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9572268Z return self._call_impl(*args, 
**kwargs) 2023-05-06T13:29:45.9573176Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9573778Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9574756Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2103, in forward 2023-05-06T13:29:45.9575394Z to_mask = None 2023-05-06T13:29:45.9576274Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9576909Z return self._call_impl(*args, **kwargs) 2023-05-06T13:29:45.9577772Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9578424Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9579392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1632, in forward 2023-05-06T13:29:45.9580144Z layer_outputs = layer_module( 2023-05-06T13:29:45.9581077Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9581722Z return self._call_impl(*args, **kwargs) 2023-05-06T13:29:45.9582616Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9583250Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9584225Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1484, in forward 2023-05-06T13:29:45.9584916Z self_attention_outputs = self.attention( 2023-05-06T13:29:45.9585904Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9586547Z return self._call_impl(*args, **kwargs) 2023-05-06T13:29:45.9587809Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9588415Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9589402Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1397, in forward 2023-05-06T13:29:45.9590144Z self_outputs = self.self( 2023-05-06T13:29:45.9591029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:29:45.9591798Z return self._call_impl(*args, **kwargs) 2023-05-06T13:29:45.9592709Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:29:45.9593580Z return forward_call(*args, **kwargs) 2023-05-06T13:29:45.9594581Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 470, in forward 2023-05-06T13:29:45.9595365Z context_layer, attention_probs = self.bigbird_block_sparse_attention( 2023-05-06T13:29:45.9596358Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T13:29:45.9597262Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:29:45.9598205Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:29:45.9598895Z result = inner_convert(frame, cache_size, hooks, frame_state) 
2023-05-06T13:29:45.9599810Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:29:45.9600324Z return fn(*args, **kwargs) 2023-05-06T13:29:45.9601166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:29:45.9601748Z return _compile( 2023-05-06T13:29:45.9602635Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9603203Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9604077Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T13:29:45.9604742Z out_code = transform_code_object(code, transform) 2023-05-06T13:29:45.9605813Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T13:29:45.9606561Z transformations(instructions, code_options) 2023-05-06T13:29:45.9607526Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T13:29:45.9608124Z tracer.run() 2023-05-06T13:29:45.9608979Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T13:29:45.9609575Z super().run() 2023-05-06T13:29:45.9610526Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:29:45.9611119Z and self.step() 2023-05-06T13:29:45.9611966Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T13:29:45.9612615Z getattr(self, inst.opname)(inst) 2023-05-06T13:29:45.9613540Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T13:29:45.9614238Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T13:29:45.9615224Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T13:29:45.9616002Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T13:29:45.9616677Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T13:29:45.9617566Z return func(*args, **kwds) 2023-05-06T13:29:45.9618582Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T13:29:45.9619214Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T13:29:45.9620041Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9620545Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9621918Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T13:29:45.9622562Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T13:29:45.9623626Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T13:29:45.9624224Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T13:29:45.9625033Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T13:29:45.9625593Z compiled_gm = 
compiler_fn(gm, example_inputs) 2023-05-06T13:29:45.9626359Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T13:29:45.9626877Z return compile_fx(*args, **kwargs) 2023-05-06T13:29:45.9627611Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T13:29:45.9628078Z return aot_autograd( 2023-05-06T13:29:45.9628776Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T13:29:45.9629325Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T13:29:45.9630147Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T13:29:45.9630673Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T13:29:45.9631362Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9631803Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9632518Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T13:29:45.9633137Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:29:45.9633933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T13:29:45.9634538Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:29:45.9635417Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T13:29:45.9636039Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:29:45.9637029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T13:29:45.9637597Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T13:29:45.9638339Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9638803Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9639503Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T13:29:45.9640020Z return inner_compile( 2023-05-06T13:29:45.9640725Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T13:29:45.9641268Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T13:29:45.9641966Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T13:29:45.9642679Z return fn(*args, **kwargs) 2023-05-06T13:29:45.9643119Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T13:29:45.9643522Z return func(*args, **kwds) 2023-05-06T13:29:45.9644243Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T13:29:45.9644769Z compiled_fn = graph.compile_to_fn() 2023-05-06T13:29:45.9645484Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T13:29:45.9645984Z return 
self.compile_to_module().call 2023-05-06T13:29:45.9646873Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9647316Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9647981Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T13:29:45.9648477Z code, linemap = self.codegen() 2023-05-06T13:29:45.9649137Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 647, in codegen 2023-05-06T13:29:45.9649709Z return self.wrapper_code.generate() 2023-05-06T13:29:45.9650385Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:29:45.9650826Z r = func(*args, **kwargs) 2023-05-06T13:29:45.9651500Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 419, in generate 2023-05-06T13:29:45.9651994Z output_refs = self.get_output_refs() 2023-05-06T13:29:45.9652695Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 274, in wrapper 2023-05-06T13:29:45.9653165Z setattr(self, key, fn(self)) 2023-05-06T13:29:45.9653902Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in get_output_refs 2023-05-06T13:29:45.9654492Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T13:29:45.9655299Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in 2023-05-06T13:29:45.9655868Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T13:29:45.9656610Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 2142, in codegen_reference 2023-05-06T13:29:45.9657178Z expr = pexpr(V.graph.sizevars.simplify(self.shape)) 2023-05-06T13:29:45.9657936Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint 2023-05-06T13:29:45.9658449Z return self._str(self._print(expr)) 2023-05-06T13:29:45.9659127Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print 2023-05-06T13:29:45.9659674Z return printmethod(expr, **kwargs) 2023-05-06T13:29:45.9660401Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 191, in _print_Pow 2023-05-06T13:29:45.9660874Z assert exp == int(exp), exp 2023-05-06T13:29:45.9661450Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T13:29:45.9661961Z AssertionError: -1/2 2023-05-06T13:29:45.9662168Z 2023-05-06T13:29:45.9662177Z 2023-05-06T13:29:45.9662412Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:29:45.9662802Z import torch._dynamo 2023-05-06T13:29:45.9663193Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:29:45.9663447Z 2023-05-06T13:29:47.0347597Z ERROR 2023-05-06T13:30:11.3581379Z cuda eval hf_DistilBert 1.390x 2023-05-06T13:30:52.2351740Z cuda eval hf_GPT2 1.704x 2023-05-06T13:31:56.5911227Z cuda eval hf_GPT2_large 2.303x 2023-05-06T13:32:14.6155591Z cuda eval hf_Longformer [2023-05-06 13:32:14,613] torch._dynamo.variables.torch: [WARNING] Calling on only torch.SymInt arguments is not yet supported. 2023-05-06T13:32:14.6156561Z To support this behavior, we need to allow const-propping tensors that store symint data. 
2023-05-06T13:32:14.6157308Z For now, dynamo will explicitly graph break when it encounters user code with this behavior. 2023-05-06T13:32:14.6157549Z 2023-05-06T13:32:20.1544253Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:32:20.1544590Z Traceback (most recent call last): 2023-05-06T13:32:20.1544929Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:32:20.1545252Z fn(model, example_inputs) 2023-05-06T13:32:20.1546480Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:32:20.1546847Z return fn(*args, **kwargs) 2023-05-06T13:32:20.1547207Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:32:20.1547528Z return mod(*inputs) 2023-05-06T13:32:20.1548059Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:32:20.1548443Z return self._call_impl(*args, **kwargs) 2023-05-06T13:32:20.1548937Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:32:20.1549292Z return forward_call(*args, **kwargs) 2023-05-06T13:32:20.1549857Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T13:32:20.1550228Z outputs = self.longformer( 2023-05-06T13:32:20.1550836Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:32:20.1551872Z return self._call_impl(*args, **kwargs) 2023-05-06T13:32:20.1552595Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:32:20.1552971Z return forward_call(*args, **kwargs) 2023-05-06T13:32:20.1553747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T13:32:20.1554254Z encoder_outputs = self.encoder( 2023-05-06T13:32:20.1555115Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:32:20.1555742Z return self._call_impl(*args, **kwargs) 2023-05-06T13:32:20.1556751Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:32:20.1557400Z return forward_call(*args, **kwargs) 2023-05-06T13:32:20.1558339Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T13:32:20.1559180Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T13:32:20.1560253Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1326, in 2023-05-06T13:32:20.1560995Z layer_outputs = layer_module( 2023-05-06T13:32:20.1561859Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:32:20.1562547Z return self._call_impl(*args, **kwargs) 2023-05-06T13:32:20.1563416Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:32:20.1563932Z return forward_call(*args, **kwargs) 2023-05-06T13:32:20.1564799Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T13:32:20.1565465Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:32:20.1566898Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:32:20.1567536Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T13:32:20.1568446Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:32:20.1569078Z return fn(*args, **kwargs) 2023-05-06T13:32:20.1570035Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:32:20.1570755Z return _compile( 2023-05-06T13:32:20.1571558Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:32:20.1572151Z r = func(*args, **kwargs) 2023-05-06T13:32:20.1572655Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T13:32:20.1573032Z out_code = transform_code_object(code, transform) 2023-05-06T13:32:20.1573612Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T13:32:20.1574028Z transformations(instructions, code_options) 2023-05-06T13:32:20.1574539Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T13:32:20.1574871Z tracer.run() 2023-05-06T13:32:20.1575364Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T13:32:20.1575692Z super().run() 2023-05-06T13:32:20.1576152Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:32:20.1576483Z and self.step() 2023-05-06T13:32:20.1576958Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T13:32:20.1577303Z getattr(self, inst.opname)(inst) 2023-05-06T13:32:20.1577822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T13:32:20.1578189Z self.output.compile_subgraph( 2023-05-06T13:32:20.1578700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T13:32:20.1579106Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T13:32:20.1579477Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T13:32:20.1579780Z return func(*args, **kwds) 2023-05-06T13:32:20.1580342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T13:32:20.1580739Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T13:32:20.1581241Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:32:20.1581581Z r = func(*args, **kwargs) 2023-05-06T13:32:20.1582070Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T13:32:20.1582502Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 
2023-05-06T13:32:20.1583072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T13:32:20.1583458Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T13:32:20.1584001Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T13:32:20.1584387Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T13:32:20.1584912Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T13:32:20.1585520Z return compile_fx(*args, **kwargs) 2023-05-06T13:32:20.1586036Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T13:32:20.1586376Z return aot_autograd( 2023-05-06T13:32:20.1586855Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T13:32:20.1587253Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T13:32:20.1587817Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T13:32:20.1588210Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T13:32:20.1588810Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:32:20.1589150Z r = func(*args, **kwargs) 2023-05-06T13:32:20.1589693Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T13:32:20.1590160Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:32:20.1590778Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T13:32:20.1591217Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:32:20.1591823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T13:32:20.1592248Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T13:32:20.1592834Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T13:32:20.1593241Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T13:32:20.1593762Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:32:20.1594077Z r = func(*args, **kwargs) 2023-05-06T13:32:20.1594572Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T13:32:20.1594914Z return inner_compile( 2023-05-06T13:32:20.1595411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T13:32:20.1595801Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T13:32:20.1596306Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T13:32:20.1596818Z return fn(*args, **kwargs) 2023-05-06T13:32:20.1597235Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T13:32:20.1597541Z return func(*args, 
**kwds) 2023-05-06T13:32:20.1598066Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner 2023-05-06T13:32:20.1598549Z graph.run(*example_inputs) 2023-05-06T13:32:20.1599297Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:32:20.1599777Z r = func(*args, **kwargs) 2023-05-06T13:32:20.1607554Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run 2023-05-06T13:32:20.1608067Z return super().run(*args) 2023-05-06T13:32:20.1608831Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T13:32:20.1609327Z self.env[node] = self.run_node(node) 2023-05-06T13:32:20.1610031Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 488, in run_node 2023-05-06T13:32:20.1610585Z result = super().run_node(n) 2023-05-06T13:32:20.1611267Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T13:32:20.1612093Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T13:32:20.1612846Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 392, in call_function 2023-05-06T13:32:20.1613430Z raise LoweringException(e, target, args, kwargs).with_traceback( 2023-05-06T13:32:20.1614215Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 389, in call_function 2023-05-06T13:32:20.1614703Z out = lowerings[target](*args, **kwargs) 2023-05-06T13:32:20.1615433Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 228, in wrapped 2023-05-06T13:32:20.1616103Z out = decomp_fn(*args, **kwargs) 2023-05-06T13:32:20.1616808Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 4036, in sym_stride 2023-05-06T13:32:20.1617307Z return a.get_stride()[dim] 2023-05-06T13:32:20.1617994Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3823, in __getattr__ 2023-05-06T13:32:20.1618474Z fn = getattr(self.data, name) 2023-05-06T13:32:20.1619045Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T13:32:20.1619786Z LoweringException: AttributeError: 'View' object has no attribute 'get_stride' 2023-05-06T13:32:20.1620200Z target: aten.sym_stride 2023-05-06T13:32:20.1620558Z args[0]: TensorBox( 2023-05-06T13:32:20.1620837Z View( 2023-05-06T13:32:20.1621087Z View( 2023-05-06T13:32:20.1621413Z PermuteView(data=PermuteView(data=View( 2023-05-06T13:32:20.1621755Z StorageBox( 2023-05-06T13:32:20.1622040Z Pointwise( 2023-05-06T13:32:20.1622390Z 'cuda', 2023-05-06T13:32:20.1622676Z torch.float16, 2023-05-06T13:32:20.1622985Z def inner_fn(index): 2023-05-06T13:32:20.1623283Z i0, i1, i2 = index 2023-05-06T13:32:20.1623659Z tmp0 = ops.load(buf1, i2 + 768 * i1 + 768 * i0 * s0) 2023-05-06T13:32:20.1624037Z tmp1 = ops.load(arg1_1, i2) 2023-05-06T13:32:20.1624386Z tmp2 = tmp0 + tmp1 2023-05-06T13:32:20.1624756Z tmp3 = ops.constant(8.0, torch.float16) 2023-05-06T13:32:20.1625125Z tmp4 = tmp2 / tmp3 2023-05-06T13:32:20.1625457Z return tmp4 2023-05-06T13:32:20.1625733Z , 2023-05-06T13:32:20.1626034Z ranges=[4096, s0, 768], 2023-05-06T13:32:20.1626361Z origin_node=div, 2023-05-06T13:32:20.1626669Z origins={add, div} 2023-05-06T13:32:20.1626964Z ) 2023-05-06T13:32:20.1627229Z ), 
2023-05-06T13:32:20.1627514Z size=(4096, s0, 12, 64), 2023-05-06T13:32:20.1627881Z reindex=lambda i0, i1, i2, i3: [i0, i1, 64*i2 + i3], 2023-05-06T13:32:20.1628260Z origins={add, div, view_6} 2023-05-06T13:32:20.1628603Z ), dims=[1, 0, 2, 3]), dims=[0, 2, 1, 3]), 2023-05-06T13:32:20.1628935Z size=(12*s0, 4096, 64), 2023-05-06T13:32:20.1629396Z reindex=lambda i0, i1, i2: [ModularIndexing(i0, 12, s0), ModularIndexing(i0, 1, 12), i1, i2], 2023-05-06T13:32:20.1629810Z origins={view_8} 2023-05-06T13:32:20.1630104Z ), 2023-05-06T13:32:20.1630433Z size=(12*s0, 8, 512, 64), 2023-05-06T13:32:20.1630781Z reindex=lambda i0, i1, i2, i3: [i0, 512*i1 + i2, i3], 2023-05-06T13:32:20.1631133Z origins={view_10} 2023-05-06T13:32:20.1631414Z ) 2023-05-06T13:32:20.1631649Z ) 2023-05-06T13:32:20.1631907Z args[1]: 1 2023-05-06T13:32:20.1632085Z 2023-05-06T13:32:20.1632094Z 2023-05-06T13:32:20.1632322Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:32:20.1632741Z import torch._dynamo 2023-05-06T13:32:20.1633111Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:32:20.1633370Z 2023-05-06T13:32:21.3159781Z ERROR 2023-05-06T13:32:47.9929312Z cuda eval hf_Reformer [2023-05-06 13:32:47,990] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T13:32:56.4312454Z [2023-05-06 13:32:56,429] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T13:33:00.2434250Z [2023-05-06 13:33:00,242] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T13:33:03.2284475Z 2.019x 2023-05-06T13:33:43.8596033Z cuda eval hf_T5 1.647x 2023-05-06T13:34:37.6201817Z cuda eval hf_T5_base 1.422x 2023-05-06T13:36:05.4748791Z cuda eval hf_T5_large 1.291x 2023-05-06T13:36:15.0881213Z cuda eval lennard_jones 0.779x 2023-05-06T13:36:41.9487615Z cuda eval llama ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:36:41.9488543Z Traceback (most recent call last): 2023-05-06T13:36:41.9489187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:36:41.9489748Z fn(model, example_inputs) 2023-05-06T13:36:41.9491804Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:36:41.9492358Z return fn(*args, **kwargs) 2023-05-06T13:36:41.9493138Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T13:36:41.9493767Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:36:41.9494653Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:36:41.9495446Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T13:36:41.9496424Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:36:41.9497028Z return fn(*args, **kwargs) 2023-05-06T13:36:41.9497985Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:36:41.9498586Z return _compile( 2023-05-06T13:36:41.9499450Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:36:41.9499995Z r = func(*args, **kwargs) 2023-05-06T13:36:41.9500850Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 470, in 
_compile 2023-05-06T13:36:41.9501448Z check_fn = CheckFunctionManager( 2023-05-06T13:36:41.9502328Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 747, in __init__ 2023-05-06T13:36:41.9502989Z guard.create(local_builder, global_builder) 2023-05-06T13:36:41.9503853Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_guards.py", line 196, in create 2023-05-06T13:36:41.9504606Z return self.create_fn(self.source.select(local_builder, global_builder), self) 2023-05-06T13:36:41.9505667Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 516, in SHAPE_ENV 2023-05-06T13:36:41.9506318Z guards = output_graph.shape_env.produce_guards( 2023-05-06T13:36:41.9507298Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 2400, in produce_guards 2023-05-06T13:36:41.9508064Z raise ConstraintViolationError(f"Constraints violated!\n{err}") 2023-05-06T13:36:41.9508820Z torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated! 2023-05-06T13:36:41.9510284Z 1. Could not validate constraint UnspecConstraint(L['inputs'][0].size()[0]) as L['inputs'][0].size()[0] is actually a non-atomic symbolic expression 32. Did you really mean to mark this dimension as dynamic? 2023-05-06T13:36:41.9510893Z 2023-05-06T13:36:41.9510905Z 2023-05-06T13:36:41.9511197Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:36:41.9512131Z import torch._dynamo 2023-05-06T13:36:41.9512609Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:36:41.9512922Z 2023-05-06T13:36:43.4687768Z ERROR 2023-05-06T13:36:49.0711516Z cuda eval maml ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:36:49.0712175Z Traceback (most recent call last): 2023-05-06T13:36:49.0713823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1308, in run_node 2023-05-06T13:36:49.0714183Z return node.target(*args, **kwargs) 2023-05-06T13:36:49.0714803Z TypeError: conv2d() received an invalid combination of arguments - got (FakeTensor, FakeTensor, FakeTensor, padding=int, stride=SymInt), but expected one of: 2023-05-06T13:36:49.0715795Z * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) 2023-05-06T13:36:49.0716296Z * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) 2023-05-06T13:36:49.0717080Z 2023-05-06T13:36:49.0717088Z 2023-05-06T13:36:49.0717351Z The above exception was the direct cause of the following exception: 2023-05-06T13:36:49.0717565Z 2023-05-06T13:36:49.0717681Z Traceback (most recent call last): 2023-05-06T13:36:49.0718247Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1256, in get_fake_value 2023-05-06T13:36:49.0718601Z return wrap_fake_exception( 2023-05-06T13:36:49.0719100Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 850, in wrap_fake_exception 2023-05-06T13:36:49.0719433Z return fn() 2023-05-06T13:36:49.0719963Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1257, in 2023-05-06T13:36:49.0720370Z lambda: run_node(tx.output, node, args, kwargs, nnmodule) 2023-05-06T13:36:49.0720890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1320, in run_node 2023-05-06T13:36:49.0721230Z raise RuntimeError( 
2023-05-06T13:36:49.0722192Z RuntimeError: Failed running call_function (*(FakeTensor(..., device='cuda:0', size=(5, 1, 28, 28)), Parameter(FakeTensor(..., device='cuda:0', size=(64, 1, 3, 3), requires_grad=True)), Parameter(FakeTensor(..., device='cuda:0', size=(64,), requires_grad=True))), **{'stride': s4, 'padding': 0}): 2023-05-06T13:36:49.0723460Z conv2d() received an invalid combination of arguments - got (FakeTensor, FakeTensor, FakeTensor, padding=int, stride=SymInt), but expected one of: 2023-05-06T13:36:49.0723988Z * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups) 2023-05-06T13:36:49.0724478Z * (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups) 2023-05-06T13:36:49.0724743Z 2023-05-06T13:36:49.0724844Z (scroll up for backtrace) 2023-05-06T13:36:49.0724991Z 2023-05-06T13:36:49.0725159Z The above exception was the direct cause of the following exception: 2023-05-06T13:36:49.0725348Z 2023-05-06T13:36:49.0725463Z Traceback (most recent call last): 2023-05-06T13:36:49.0725803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:36:49.0726119Z fn(model, example_inputs) 2023-05-06T13:36:49.0726587Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:36:49.0726925Z return fn(*args, **kwargs) 2023-05-06T13:36:49.0727263Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:36:49.0727578Z return mod(*inputs) 2023-05-06T13:36:49.0728072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:36:49.0728444Z return self._call_impl(*args, **kwargs) 2023-05-06T13:36:49.0729221Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:36:49.0729563Z return forward_call(*args, **kwargs) 2023-05-06T13:36:49.0729985Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/meta.py", line 68, in forward 2023-05-06T13:36:49.0730375Z return self.finetunning(x_spt[0], y_spt[0], x_qry[0], y_qry[0]) 2023-05-06T13:36:49.0730778Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/meta.py", line 170, in finetunning 2023-05-06T13:36:49.0731107Z net = deepcopy(self.net) 2023-05-06T13:36:49.0731602Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T13:36:49.0732115Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:36:49.0732649Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:36:49.0733056Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T13:36:49.0733577Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:36:49.0733914Z return fn(*args, **kwargs) 2023-05-06T13:36:49.0734411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:36:49.0734761Z return _compile( 2023-05-06T13:36:49.0735231Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:36:49.0735547Z r = func(*args, **kwargs) 
2023-05-06T13:36:49.0736036Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T13:36:49.0736416Z out_code = transform_code_object(code, transform) 2023-05-06T13:36:49.0736997Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T13:36:49.0737407Z transformations(instructions, code_options) 2023-05-06T13:36:49.0737932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T13:36:49.0738263Z tracer.run() 2023-05-06T13:36:49.0738723Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T13:36:49.0739049Z super().run() 2023-05-06T13:36:49.0739513Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:36:49.0739893Z and self.step() 2023-05-06T13:36:49.0740392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T13:36:49.0740745Z getattr(self, inst.opname)(inst) 2023-05-06T13:36:49.0741254Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 385, in wrapper 2023-05-06T13:36:49.0741593Z return inner_fn(self, inst) 2023-05-06T13:36:49.0742107Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1095, in CALL_FUNCTION 2023-05-06T13:36:49.0742468Z self.call_function(fn, args, {}) 2023-05-06T13:36:49.0742964Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 554, in call_function 2023-05-06T13:36:49.0743346Z self.push(fn.call_function(self, args, kwargs)) 2023-05-06T13:36:49.0743884Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 699, in call_function 2023-05-06T13:36:49.0744289Z ).call_function(tx, [self] + list(args), kwargs) 2023-05-06T13:36:49.0744822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 284, in call_function 2023-05-06T13:36:49.0745368Z return super().call_function(tx, args, kwargs) 2023-05-06T13:36:49.0745908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 117, in call_function 2023-05-06T13:36:49.0746268Z return tx.inline_user_function_return( 2023-05-06T13:36:49.0746824Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return 2023-05-06T13:36:49.0747294Z result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs) 2023-05-06T13:36:49.0747891Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call 2023-05-06T13:36:49.0748266Z return cls.inline_call_(parent, func, args, kwargs) 2023-05-06T13:36:49.0748904Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_ 2023-05-06T13:36:49.0749245Z tracer.run() 2023-05-06T13:36:49.0749775Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:36:49.0750092Z and self.step() 2023-05-06T13:36:49.0750567Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", 
line 663, in step 2023-05-06T13:36:49.0750923Z getattr(self, inst.opname)(inst) 2023-05-06T13:36:49.0751415Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 385, in wrapper 2023-05-06T13:36:49.0751762Z return inner_fn(self, inst) 2023-05-06T13:36:49.0752281Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1135, in CALL_FUNCTION_EX 2023-05-06T13:36:49.0752925Z self.call_function(fn, argsvars.items, kwargsvars.items) 2023-05-06T13:36:49.0753996Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 554, in call_function 2023-05-06T13:36:49.0754861Z self.push(fn.call_function(self, args, kwargs)) 2023-05-06T13:36:49.0755699Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/misc.py", line 418, in call_function 2023-05-06T13:36:49.0756265Z return self.obj.call_method(tx, self.name, args, kwargs).add_options(self) 2023-05-06T13:36:49.0757261Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py", line 760, in call_method 2023-05-06T13:36:49.0757819Z return super().call_method(tx, name, args, kwargs) 2023-05-06T13:36:49.0758591Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 274, in call_method 2023-05-06T13:36:49.0759086Z ).call_function(tx, args, kwargs) 2023-05-06T13:36:49.0759906Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 321, in call_function 2023-05-06T13:36:49.0760453Z return super().call_function(tx, args, kwargs) 2023-05-06T13:36:49.0761207Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 284, in call_function 2023-05-06T13:36:49.0761753Z return super().call_function(tx, args, kwargs) 2023-05-06T13:36:49.0762508Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 117, in call_function 2023-05-06T13:36:49.0763035Z return tx.inline_user_function_return( 2023-05-06T13:36:49.0763817Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 590, in inline_user_function_return 2023-05-06T13:36:49.0764483Z result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs) 2023-05-06T13:36:49.0765358Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2115, in inline_call 2023-05-06T13:36:49.0765897Z return cls.inline_call_(parent, func, args, kwargs) 2023-05-06T13:36:49.0766673Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2193, in inline_call_ 2023-05-06T13:36:49.0767459Z tracer.run() 2023-05-06T13:36:49.0768159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:36:49.0768606Z and self.step() 2023-05-06T13:36:49.0769262Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T13:36:49.0769802Z getattr(self, inst.opname)(inst) 2023-05-06T13:36:49.0770467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 385, in wrapper 2023-05-06T13:36:49.0770940Z return inner_fn(self, inst) 2023-05-06T13:36:49.0771864Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 
1147, in CALL_FUNCTION_KW 2023-05-06T13:36:49.0772409Z self.call_function(fn, args, kwargs) 2023-05-06T13:36:49.0773120Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 554, in call_function 2023-05-06T13:36:49.0773679Z self.push(fn.call_function(self, args, kwargs)) 2023-05-06T13:36:49.0774436Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 625, in call_function 2023-05-06T13:36:49.0774934Z tensor_variable = wrap_fx_proxy( 2023-05-06T13:36:49.0775686Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 968, in wrap_fx_proxy 2023-05-06T13:36:49.0776190Z return wrap_fx_proxy_cls( 2023-05-06T13:36:49.0776935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1003, in wrap_fx_proxy_cls 2023-05-06T13:36:49.0777473Z example_value = get_fake_value(proxy.node, tx) 2023-05-06T13:36:49.0778199Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 1287, in get_fake_value 2023-05-06T13:36:49.0778694Z raise TorchRuntimeError() from e 2023-05-06T13:36:49.0779096Z torch._dynamo.exc.TorchRuntimeError: 2023-05-06T13:36:49.0779352Z 2023-05-06T13:36:49.0779475Z from user code: 2023-05-06T13:36:49.0780201Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:36:49.0780704Z return forward_call(*args, **kwargs) 2023-05-06T13:36:49.0781208Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/learner.py", line 144, in forward 2023-05-06T13:36:49.0781752Z x = F.conv2d(x, w, b, stride=param[4], padding=param[5]) 2023-05-06T13:36:49.0782009Z 2023-05-06T13:36:49.0782297Z Set torch._dynamo.config.verbose=True or TORCHDYNAMO_VERBOSE=1 for more information 2023-05-06T13:36:49.0782628Z 2023-05-06T13:36:49.0782635Z 2023-05-06T13:36:49.0782885Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:36:49.0783284Z import torch._dynamo 2023-05-06T13:36:49.0783681Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:36:49.0783944Z 2023-05-06T13:36:49.9941284Z ERROR 2023-05-06T13:36:58.7073951Z cuda eval maml_omniglot 1.317x 2023-05-06T13:37:22.3736969Z cuda eval mnasnet1_0 1.409x 2023-05-06T13:37:48.6704463Z cuda eval mobilenet_v2 1.478x 2023-05-06T13:37:52.7484686Z The eval test only supports CPU. 2023-05-06T13:37:52.7487750Z Traceback (most recent call last): 2023-05-06T13:37:52.7492831Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T13:37:52.7493253Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T13:37:52.7493631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T13:37:52.7493961Z benchmark = benchmark_cls( 2023-05-06T13:37:52.7494537Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T13:37:52.7496692Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T13:37:52.7497813Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T13:37:52.7498263Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T13:37:52.7498621Z NotImplementedError: The eval test only supports CPU. 
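[Editor's note] The maml failure above reduces to conv2d being handed a traced SymInt (stride=s4 in the kwargs dump) where its overloads only accept plain Python ints or tuples of ints. A minimal sketch of the same call is below; the shapes are copied from the FakeTensor sizes in the traceback, and the snippet is illustrative only, not part of the benchmark harness.

```python
# Illustrative sketch of the conv2d call flagged in the maml traceback above.
# With plain int stride/padding the overload check passes and the op runs:
import torch
import torch.nn.functional as F

x = torch.randn(5, 1, 28, 28)   # FakeTensor(..., size=(5, 1, 28, 28)) in the log
w = torch.randn(64, 1, 3, 3)    # conv weight from the log
b = torch.randn(64)
y = F.conv2d(x, w, b, stride=2, padding=0)
print(y.shape)                  # torch.Size([5, 64, 13, 13])

# Under torch.compile the stride read from param[4] in learner.py is traced as
# a SymInt (s4 above), and conv2d's schema ("tuple of ints stride") rejects it,
# which is the TypeError reported by the benchmark.
```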
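[Editor's note] Each of these compilation failures ends with the same Dynamo hint about falling back to eager. Written out as a self-contained snippet (the Linear model and input are placeholders, not benchmark models), the fallback looks like this:

```python
# The fallback the log recommends after each BackendCompilerFailed: suppress the
# backend error and run the affected frames eagerly instead of aborting.
import torch
import torch._dynamo

torch._dynamo.config.suppress_errors = True

model = torch.nn.Linear(8, 8)                    # placeholder module
compiled = torch.compile(model, backend="inductor")
print(compiled(torch.randn(2, 8)).shape)         # torch.Size([2, 8])
```

With the flag set, a lowering or guard error is logged as a warning and the frame runs uncompiled, which keeps a long perf sweep going but hides the underlying regression.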
2023-05-06T13:37:52.7498817Z 2023-05-06T13:37:52.7502276Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T13:38:21.6302510Z cuda eval mobilenet_v3_large 1.360x 2023-05-06T13:38:28.7634229Z cuda eval moco [2023-05-06 13:38:28,762] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T13:56:25.8372819Z [2023-05-06 13:56:25,833] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T13:56:25.8373884Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T13:56:25.8381404Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T13:56:26.5239194Z [2023-05-06 13:56:26,523] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T13:56:39.1793994Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T13:56:39.1794917Z Traceback (most recent call last): 2023-05-06T13:56:39.1795279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T13:56:39.1795596Z fn(model, example_inputs) 2023-05-06T13:56:39.1796295Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T13:56:39.1796868Z return fn(*args, **kwargs) 2023-05-06T13:56:39.1797224Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T13:56:39.1797549Z return mod(*inputs) 2023-05-06T13:56:39.1798092Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:56:39.1798477Z return self._call_impl(*args, **kwargs) 2023-05-06T13:56:39.1799633Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:56:39.1800154Z return forward_call(*args, **kwargs) 2023-05-06T13:56:39.1801106Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T13:56:39.1801881Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T13:56:39.1802897Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T13:56:39.1803347Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T13:56:39.1803911Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T13:56:39.1804295Z return self._call_impl(*args, **kwargs) 2023-05-06T13:56:39.1804780Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T13:56:39.1805140Z return forward_call(*args, **kwargs) 2023-05-06T13:56:39.1805520Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T13:56:39.1805902Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T13:56:39.1806321Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T13:56:39.1806723Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T13:56:39.1807141Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 139, in 2023-05-06T13:56:39.1807516Z k = self._batch_unshuffle_ddp(k, idx_unshuffle) 2023-05-06T13:56:39.1808364Z 
File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 158, in 2023-05-06T13:56:39.1808727Z self._dequeue_and_enqueue(k) 2023-05-06T13:56:39.1809230Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T13:56:39.1809586Z return func(*args, **kwargs) 2023-05-06T13:56:39.1809960Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 55, in _dequeue_and_enqueue 2023-05-06T13:56:39.1810325Z keys = concat_all_gather(keys) 2023-05-06T13:56:39.1810771Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 59, in 2023-05-06T13:56:39.1811315Z ptr = int(self.queue_ptr) 2023-05-06T13:56:39.1811825Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T13:56:39.1812226Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T13:56:39.1812781Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T13:56:39.1813183Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T13:56:39.1813709Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T13:56:39.1814033Z return fn(*args, **kwargs) 2023-05-06T13:56:39.1814544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T13:56:39.1814896Z return _compile( 2023-05-06T13:56:39.1815379Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:56:39.1815736Z r = func(*args, **kwargs) 2023-05-06T13:56:39.1816219Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T13:56:39.1816605Z out_code = transform_code_object(code, transform) 2023-05-06T13:56:39.1817166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T13:56:39.1817584Z transformations(instructions, code_options) 2023-05-06T13:56:39.1818105Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T13:56:39.1818422Z tracer.run() 2023-05-06T13:56:39.1818890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T13:56:39.1819214Z super().run() 2023-05-06T13:56:39.1819690Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T13:56:39.1820007Z and self.step() 2023-05-06T13:56:39.1820480Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T13:56:39.1820882Z getattr(self, inst.opname)(inst) 2023-05-06T13:56:39.1821391Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T13:56:39.1821760Z self.output.compile_subgraph( 2023-05-06T13:56:39.1822278Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T13:56:39.1822705Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T13:56:39.1823068Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T13:56:39.1823370Z return func(*args, **kwds) 2023-05-06T13:56:39.1823902Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T13:56:39.1824278Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T13:56:39.1824901Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:56:39.1825236Z r = func(*args, **kwargs) 2023-05-06T13:56:39.1825763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T13:56:39.1826182Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T13:56:39.1826753Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T13:56:39.1827162Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T13:56:39.1827698Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T13:56:39.1828201Z return self.backend_compile_fn(gm, example_inputs) 2023-05-06T13:56:39.1828753Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T13:56:39.1829147Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T13:56:39.1829656Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T13:56:39.1830013Z return compile_fx(*args, **kwargs) 2023-05-06T13:56:39.1830515Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T13:56:39.1830881Z return aot_autograd( 2023-05-06T13:56:39.1831375Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T13:56:39.1831776Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T13:56:39.1832346Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T13:56:39.1832727Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T13:56:39.1833243Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T13:56:39.1833574Z r = func(*args, **kwargs) 2023-05-06T13:56:39.1834102Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2959, in create_aot_dispatcher_function 2023-05-06T13:56:39.1834527Z fw_metadata = run_functionalized_fw_and_collect_metadata( 2023-05-06T13:56:39.1835062Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 719, in inner 2023-05-06T13:56:39.1835407Z flat_f_outs = f(*flat_f_args) 2023-05-06T13:56:39.1835907Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3259, in functional_call 2023-05-06T13:56:39.1836312Z out = Interpreter(mod).run(*args[params_len:], **kwargs) 2023-05-06T13:56:39.1837143Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T13:56:39.1837501Z self.env[node] = self.run_node(node) 2023-05-06T13:56:39.1838001Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T13:56:39.1838377Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T13:56:39.1838892Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 267, in call_function 2023-05-06T13:56:39.1839221Z return target(*args, **kwargs) 2023-05-06T13:56:39.1839730Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T13:56:39.1840104Z return replace_fn(func)(*args, **kwargs) 2023-05-06T13:56:39.1840537Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T13:56:39.1841045Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T13:56:39.1841245Z 2023-05-06T13:56:39.1841493Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T13:56:39.1842019Z Original traceback: 2023-05-06T13:56:39.1842390Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T13:56:39.1842762Z self.queue_ptr[0] = ptr 2023-05-06T13:56:39.1842911Z 2023-05-06T13:56:39.1842916Z 2023-05-06T13:56:39.1842921Z 2023-05-06T13:56:39.1843087Z You can suppress this exception and fall back to eager by setting: 2023-05-06T13:56:39.1843382Z import torch._dynamo 2023-05-06T13:56:39.1843646Z torch._dynamo.config.suppress_errors = True 2023-05-06T13:56:39.1843831Z 2023-05-06T13:56:44.0107444Z ERROR 2023-05-06T13:56:56.4521160Z cuda eval nvidia_deeprecommender 0.986x 2023-05-06T13:57:01.0864694Z cuda eval opacus_cifar10 [2023-05-06 13:57:01,085] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T13:57:01.0866662Z [2023-05-06 13:57:01,085] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 
2023-05-06T13:57:18.4511954Z 0.600x 2023-05-06T13:57:44.1948501Z cuda eval phlippe_densenet 1.548x 2023-05-06T13:57:59.2111677Z cuda eval phlippe_resnet 1.633x 2023-05-06T13:58:00.1802018Z abs_latency gmean=0.00x mean=13.931x 2023-05-06T13:58:00.1802398Z compilation_latency mean=18.540 seconds 2023-05-06T13:58:00.1803076Z compression_ratio mean=0.755x 2023-05-06T13:58:00.1805138Z eager_peak_mem gmean=0.00x mean=0.929x 2023-05-06T13:58:00.1807757Z dynamo_peak_mem gmean=0.00x mean=0.910x 2023-05-06T13:58:00.1810316Z calls_captured gmean=0.00x mean=510.731x 2023-05-06T13:58:00.1812978Z unique_graphs gmean=0.00x mean=11.885x 2023-05-06T13:58:00.1815280Z graph_breaks gmean=0.00x mean=3.846x 2023-05-06T13:58:00.1817587Z unique_graph_breaks gmean=0.00x mean=0.692x 2023-05-06T13:58:00.6903025Z + [[ inference == \i\n\f\e\r\e\n\c\e ]] 2023-05-06T13:58:00.6906558Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --inference --amp --backend inductor --disable-cudagraphs --cpp-wrapper --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_cpp_wrapper_torchbench_amp_inference_cuda_performance.csv 2023-05-06T13:58:42.0912521Z cuda eval functorch_maml_omniglot 2.225x 2023-05-06T13:59:45.8543686Z cuda eval hf_Albert 1.990x 2023-05-06T14:02:42.0469146Z cuda eval hf_Bart 1.460x 2023-05-06T14:03:46.4023612Z cuda eval hf_Bert 1.618x 2023-05-06T14:05:23.6725487Z cuda eval hf_Bert_large 2.091x 2023-05-06T14:11:22.8991402Z cuda eval hf_BigBird 1.685x 2023-05-06T14:12:17.0225156Z cuda eval hf_DistilBert 1.395x 2023-05-06T14:13:27.3226458Z cuda eval hf_GPT2 1.763x 2023-05-06T14:15:38.5369038Z cuda eval hf_GPT2_large 2.293x 2023-05-06T14:19:32.8663228Z cuda eval hf_Longformer 1.332x 2023-05-06T14:24:29.9815220Z cuda eval hf_Reformer [2023-05-06 14:24:29,979] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T14:24:31.6723722Z [2023-05-06 14:24:31,671] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T14:24:31.7094138Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T14:24:31.7094719Z Traceback (most recent call last): 2023-05-06T14:24:31.7097790Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T14:24:31.7098141Z fn(model, example_inputs) 2023-05-06T14:24:31.7098824Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T14:24:31.7099166Z return fn(*args, **kwargs) 2023-05-06T14:24:31.7099490Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T14:24:31.7099940Z return mod(*inputs) 2023-05-06T14:24:31.7100825Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:24:31.7101208Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7101715Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7102093Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7102752Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2401, in forward 2023-05-06T14:24:31.7103347Z reformer_outputs = self.reformer( 2023-05-06T14:24:31.7104361Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 
2023-05-06T14:24:31.7104804Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7105317Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7105659Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7106700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2056, in forward 2023-05-06T14:24:31.7107488Z least_common_mult_chunk_length = _get_least_common_mult_chunk_len(self.config) 2023-05-06T14:24:31.7108565Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2057, in 2023-05-06T14:24:31.7109356Z min_chunk_length = _get_min_chunk_len(self.config) 2023-05-06T14:24:31.7109954Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2093, in 2023-05-06T14:24:31.7110349Z embedding_output = self.embeddings( 2023-05-06T14:24:31.7110925Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 2100, in 2023-05-06T14:24:31.7111314Z encoder_outputs = self.encoder( 2023-05-06T14:24:31.7111821Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:24:31.7112197Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7112694Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7113030Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7113568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1727, in forward 2023-05-06T14:24:31.7113974Z hidden_states = _ReversibleFunction.apply( 2023-05-06T14:24:31.7114491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply 2023-05-06T14:24:31.7114861Z return super().apply(*args, **kwargs) # type: ignore[misc] 2023-05-06T14:24:31.7115439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1615, in forward 2023-05-06T14:24:31.7115857Z layer_outputs = layer( 2023-05-06T14:24:31.7116343Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:24:31.7117344Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7117871Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7118227Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7118757Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1480, in forward 2023-05-06T14:24:31.7119137Z attn_outputs = self.attention( 2023-05-06T14:24:31.7119646Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:24:31.7120002Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7120644Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7121000Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7121551Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/reformer/modeling_reformer.py", line 1313, in forward 
2023-05-06T14:24:31.7121937Z self_attention_outputs = self.self_attention( 2023-05-06T14:24:31.7122465Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:24:31.7122830Z return self._call_impl(*args, **kwargs) 2023-05-06T14:24:31.7123315Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:24:31.7123669Z return forward_call(*args, **kwargs) 2023-05-06T14:24:31.7124166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T14:24:31.7124555Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T14:24:31.7125075Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T14:24:31.7125483Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T14:24:31.7126050Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T14:24:31.7126374Z return fn(*args, **kwargs) 2023-05-06T14:24:31.7126887Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T14:24:31.7127230Z return _compile( 2023-05-06T14:24:31.7127697Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:24:31.7128012Z r = func(*args, **kwargs) 2023-05-06T14:24:31.7128491Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T14:24:31.7128866Z out_code = transform_code_object(code, transform) 2023-05-06T14:24:31.7129432Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T14:24:31.7129850Z transformations(instructions, code_options) 2023-05-06T14:24:31.7130368Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T14:24:31.7130695Z tracer.run() 2023-05-06T14:24:31.7131149Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T14:24:31.7131471Z super().run() 2023-05-06T14:24:31.7131935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T14:24:31.7132247Z and self.step() 2023-05-06T14:24:31.7132719Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T14:24:31.7133066Z getattr(self, inst.opname)(inst) 2023-05-06T14:24:31.7133564Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T14:24:31.7134088Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T14:24:31.7134636Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T14:24:31.7135057Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T14:24:31.7135415Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T14:24:31.7135713Z return func(*args, **kwds) 2023-05-06T14:24:31.7136276Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in 
compile_and_call_fx_graph 2023-05-06T14:24:31.7136770Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T14:24:31.7137269Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:24:31.7137606Z r = func(*args, **kwargs) 2023-05-06T14:24:31.7138106Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T14:24:31.7138526Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T14:24:31.7139093Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T14:24:31.7139492Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T14:24:31.7140031Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T14:24:31.7140397Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T14:24:31.7140915Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T14:24:31.7141272Z return compile_fx(*args, **kwargs) 2023-05-06T14:24:31.7141755Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T14:24:31.7142115Z return compile_fx_with_cpp_wrapper( 2023-05-06T14:24:31.7142654Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_with_cpp_wrapper 2023-05-06T14:24:31.7143008Z return compile_fx( 2023-05-06T14:24:31.7143475Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T14:24:31.7143810Z return aot_autograd( 2023-05-06T14:24:31.7144296Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T14:24:31.7144681Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T14:24:31.7145249Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T14:24:31.7145647Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T14:24:31.7146199Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:24:31.7146515Z r = func(*args, **kwargs) 2023-05-06T14:24:31.7147055Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T14:24:31.7147516Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:24:31.7148090Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T14:24:31.7148530Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:24:31.7149133Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T14:24:31.7149571Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:24:31.7150363Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T14:24:31.7150768Z compiled_fw = compiler(fw_module, adjusted_flat_args) 
2023-05-06T14:24:31.7151281Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:24:31.7151614Z r = func(*args, **kwargs) 2023-05-06T14:24:31.7152096Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T14:24:31.7152440Z return inner_compile( 2023-05-06T14:24:31.7152932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T14:24:31.7153416Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T14:24:31.7153931Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T14:24:31.7154266Z return fn(*args, **kwargs) 2023-05-06T14:24:31.7154578Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T14:24:31.7155042Z return func(*args, **kwds) 2023-05-06T14:24:31.7155544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T14:24:31.7155949Z compiled_fn = graph.compile_to_fn() 2023-05-06T14:24:31.7156439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T14:24:31.7157097Z return self.compile_to_module().call 2023-05-06T14:24:31.7157618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:24:31.7157951Z r = func(*args, **kwargs) 2023-05-06T14:24:31.7158425Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T14:24:31.7158786Z code, linemap = self.codegen() 2023-05-06T14:24:31.7159272Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 641, in codegen 2023-05-06T14:24:31.7159591Z self.init_wrapper_code() 2023-05-06T14:24:31.7160087Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 628, in init_wrapper_code 2023-05-06T14:24:31.7160507Z assert self.cpp_wrapper, "CudaWrapperCodeGen hit unsupported case" 2023-05-06T14:24:31.7160987Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T14:24:31.7161349Z AssertionError: CudaWrapperCodeGen hit unsupported case 2023-05-06T14:24:31.7161582Z 2023-05-06T14:24:31.7161589Z 2023-05-06T14:24:31.7161758Z You can suppress this exception and fall back to eager by setting: 2023-05-06T14:24:31.7162044Z import torch._dynamo 2023-05-06T14:24:31.7162310Z torch._dynamo.config.suppress_errors = True 2023-05-06T14:24:31.7162496Z 2023-05-06T14:24:32.9288680Z ERROR 2023-05-06T14:30:42.9505998Z cuda eval hf_T5 1.698x 2023-05-06T14:37:12.9814910Z cuda eval hf_T5_base 1.514x 2023-05-06T14:44:35.2498631Z cuda eval hf_T5_large 1.852x 2023-05-06T14:45:12.7173657Z cuda eval lennard_jones 1.274x 2023-05-06T14:45:34.9241214Z cuda eval llama ERROR:common:Backend dynamo failed in warmup() 2023-05-06T14:45:34.9241939Z Traceback (most recent call last): 2023-05-06T14:45:34.9242487Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T14:45:34.9243061Z fn(model, example_inputs) 2023-05-06T14:45:34.9245407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T14:45:34.9246002Z return fn(*args, **kwargs) 2023-05-06T14:45:34.9246893Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T14:45:34.9248297Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T14:45:34.9249168Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T14:45:34.9249576Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T14:45:34.9250168Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T14:45:34.9250570Z return fn(*args, **kwargs) 2023-05-06T14:45:34.9251071Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T14:45:34.9251418Z return _compile( 2023-05-06T14:45:34.9252097Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:45:34.9252421Z r = func(*args, **kwargs) 2023-05-06T14:45:34.9252914Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T14:45:34.9253293Z out_code = transform_code_object(code, transform) 2023-05-06T14:45:34.9253873Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T14:45:34.9254272Z transformations(instructions, code_options) 2023-05-06T14:45:34.9254788Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T14:45:34.9255115Z tracer.run() 2023-05-06T14:45:34.9255571Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T14:45:34.9255899Z super().run() 2023-05-06T14:45:34.9256366Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T14:45:34.9256690Z and self.step() 2023-05-06T14:45:34.9257153Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T14:45:34.9257501Z getattr(self, inst.opname)(inst) 2023-05-06T14:45:34.9258020Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T14:45:34.9258370Z self.output.compile_subgraph( 2023-05-06T14:45:34.9258882Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T14:45:34.9259305Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T14:45:34.9259673Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T14:45:34.9259964Z return func(*args, **kwds) 2023-05-06T14:45:34.9260533Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T14:45:34.9260930Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T14:45:34.9261417Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:45:34.9261748Z r = func(*args, **kwargs) 2023-05-06T14:45:34.9262245Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T14:45:34.9262669Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 
2023-05-06T14:45:34.9263223Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T14:45:34.9263621Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T14:45:34.9264169Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T14:45:34.9264540Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T14:45:34.9265184Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T14:45:34.9265539Z return compile_fx(*args, **kwargs) 2023-05-06T14:45:34.9266038Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T14:45:34.9266379Z return compile_fx_with_cpp_wrapper( 2023-05-06T14:45:34.9266920Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 575, in compile_fx_with_cpp_wrapper 2023-05-06T14:45:34.9267274Z return compile_fx( 2023-05-06T14:45:34.9267740Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T14:45:34.9268077Z return aot_autograd( 2023-05-06T14:45:34.9268666Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T14:45:34.9269071Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T14:45:34.9269620Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T14:45:34.9270011Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T14:45:34.9270558Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:45:34.9270876Z r = func(*args, **kwargs) 2023-05-06T14:45:34.9271411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T14:45:34.9271871Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:45:34.9272463Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T14:45:34.9272883Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:45:34.9273485Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T14:45:34.9273922Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T14:45:34.9274498Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T14:45:34.9274882Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T14:45:34.9275392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:45:34.9275721Z r = func(*args, **kwargs) 2023-05-06T14:45:34.9276206Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T14:45:34.9276551Z return inner_compile( 2023-05-06T14:45:34.9277208Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 
2023-05-06T14:45:34.9277602Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T14:45:34.9278094Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T14:45:34.9278423Z return fn(*args, **kwargs) 2023-05-06T14:45:34.9278734Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T14:45:34.9279018Z return func(*args, **kwds) 2023-05-06T14:45:34.9279511Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 210, in compile_fx_inner 2023-05-06T14:45:34.9279865Z graph.run(*example_inputs) 2023-05-06T14:45:34.9280380Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:45:34.9280710Z r = func(*args, **kwargs) 2023-05-06T14:45:34.9281169Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 249, in run 2023-05-06T14:45:34.9281631Z return super().run(*args) 2023-05-06T14:45:34.9282088Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T14:45:34.9282431Z self.env[node] = self.run_node(node) 2023-05-06T14:45:34.9282918Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 476, in run_node 2023-05-06T14:45:34.9283294Z result = fallback_handler(n.target, add_to_fallback_set=False)( 2023-05-06T14:45:34.9283822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 1043, in handler 2023-05-06T14:45:34.9284236Z TensorBox.create, ir.FallbackKernel.create(kernel, *args, **kwargs) 2023-05-06T14:45:34.9284900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3182, in create 2023-05-06T14:45:34.9285225Z packed = FallbackKernel( 2023-05-06T14:45:34.9285700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 3122, in __init__ 2023-05-06T14:45:34.9286012Z assert ( 2023-05-06T14:45:34.9286393Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T14:45:34.9286769Z AssertionError: slice.Tensor is not supported with cpp wrapper 2023-05-06T14:45:34.9286973Z 2023-05-06T14:45:34.9286980Z 2023-05-06T14:45:34.9287144Z You can suppress this exception and fall back to eager by setting: 2023-05-06T14:45:34.9287429Z import torch._dynamo 2023-05-06T14:45:34.9287691Z torch._dynamo.config.suppress_errors = True 2023-05-06T14:45:34.9287871Z 2023-05-06T14:45:35.9859955Z ERROR 2023-05-06T14:46:42.2202001Z cuda eval maml ERROR:common:Backend dynamo failed in warmup() 2023-05-06T14:46:42.2202589Z Traceback (most recent call last): 2023-05-06T14:46:42.2202945Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T14:46:42.2205693Z fn(model, example_inputs) 2023-05-06T14:46:42.2207718Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T14:46:42.2208265Z return fn(*args, **kwargs) 2023-05-06T14:46:42.2208924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T14:46:42.2209570Z return mod(*inputs) 2023-05-06T14:46:42.2210435Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T14:46:42.2211239Z return self._call_impl(*args, **kwargs) 2023-05-06T14:46:42.2212214Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T14:46:42.2212872Z return forward_call(*args, **kwargs) 2023-05-06T14:46:42.2213284Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/meta.py", line 68, in forward 2023-05-06T14:46:42.2213676Z return self.finetunning(x_spt[0], y_spt[0], x_qry[0], y_qry[0]) 2023-05-06T14:46:42.2214090Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/meta.py", line 170, in finetunning 2023-05-06T14:46:42.2214425Z net = deepcopy(self.net) 2023-05-06T14:46:42.2214799Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/maml/meta.py", line 175, in 2023-05-06T14:46:42.2215205Z grad = torch.autograd.grad(loss, net.parameters()) 2023-05-06T14:46:42.2215722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T14:46:42.2216152Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T14:46:42.2216731Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/function.py", line 274, in apply 2023-05-06T14:46:42.2217070Z return user_fn(self, *args) 2023-05-06T14:46:42.2217551Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2818, in backward 2023-05-06T14:46:42.2218410Z out = call_compiled_backward() 2023-05-06T14:46:42.2218965Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2788, in call_compiled_backward 2023-05-06T14:46:42.2219393Z CompiledFunction.compiled_bw = aot_config.bw_compiler( 2023-05-06T14:46:42.2219950Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 34, in _wrapped_bw_compiler 2023-05-06T14:46:42.2220394Z return eval_frame.disable(eval_frame.disable(bw_compiler)(*args, **kwargs)) 2023-05-06T14:46:42.2220989Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T14:46:42.2221306Z return fn(*args, **kwargs) 2023-05-06T14:46:42.2221960Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T14:46:42.2222312Z return fn(*args, **kwargs) 2023-05-06T14:46:42.2222801Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:46:42.2223117Z r = func(*args, **kwargs) 2023-05-06T14:46:42.2223600Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 712, in bw_compiler 2023-05-06T14:46:42.2223946Z return inner_compile( 2023-05-06T14:46:42.2224425Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T14:46:42.2224812Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T14:46:42.2225319Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T14:46:42.2225650Z return fn(*args, **kwargs) 2023-05-06T14:46:42.2225951Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T14:46:42.2226249Z return func(*args, **kwds) 2023-05-06T14:46:42.2226756Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T14:46:42.2227108Z compiled_fn = graph.compile_to_fn() 
2023-05-06T14:46:42.2227606Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T14:46:42.2227964Z return self.compile_to_module().call 2023-05-06T14:46:42.2228460Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:46:42.2228777Z r = func(*args, **kwargs) 2023-05-06T14:46:42.2229268Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T14:46:42.2229628Z code, linemap = self.codegen() 2023-05-06T14:46:42.2230098Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 645, in codegen 2023-05-06T14:46:42.2230442Z self.scheduler.codegen() 2023-05-06T14:46:42.2231012Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T14:46:42.2231362Z r = func(*args, **kwargs) 2023-05-06T14:46:42.2231830Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/scheduler.py", line 1245, in codegen 2023-05-06T14:46:42.2232221Z self.get_backend(device).codegen_nodes(node.get_nodes()) 2023-05-06T14:46:42.2232772Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/triton.py", line 1800, in codegen_nodes 2023-05-06T14:46:42.2233172Z return self.codegen_node_schedule(node_schedule, numel, rnumel) 2023-05-06T14:46:42.2233748Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/triton.py", line 1920, in codegen_node_schedule 2023-05-06T14:46:42.2234168Z kernel.call_kernel(V.graph.wrapper_code, kernel_name) 2023-05-06T14:46:42.2234710Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/triton.py", line 1635, in call_kernel 2023-05-06T14:46:42.2235217Z V.graph.wrapper_code.generate_kernel_call( 2023-05-06T14:46:42.2235780Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 1108, in generate_kernel_call 2023-05-06T14:46:42.2236134Z params is not None 2023-05-06T14:46:42.2236434Z AssertionError: cuda kernel parameters should already exist at this moment 2023-05-06T14:46:43.2739990Z ERROR 2023-05-06T14:47:19.7181734Z cuda eval maml_omniglot 2.190x 2023-05-06T14:48:15.9323055Z cuda eval mnasnet1_0 1.517x 2023-05-06T14:49:13.4091718Z cuda eval mobilenet_v2 1.535x 2023-05-06T14:49:17.5716312Z The eval test only supports CPU. 2023-05-06T14:49:17.5721239Z Traceback (most recent call last): 2023-05-06T14:49:17.5721636Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T14:49:17.5722016Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T14:49:17.5722421Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 308, in load_model 2023-05-06T14:49:17.5722734Z benchmark = benchmark_cls( 2023-05-06T14:49:17.5723893Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T14:49:17.5724251Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T14:49:17.5724644Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/mobilenet_v2_quantized_qat/__init__.py", line 21, in __init__ 2023-05-06T14:49:17.5725061Z raise NotImplementedError("The eval test only supports CPU.") 2023-05-06T14:49:17.5725414Z NotImplementedError: The eval test only supports CPU. 
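The inductor failures above for maml (the "slice.Tensor is not supported with cpp wrapper" assertion and the "cuda kernel parameters should already exist at this moment" assertion raised in the backward compiler) both abort that model's warmup. As the log's own hint notes, a run can instead fall back to eager on backend compile errors. A minimal sketch of that workaround, using a placeholder module and input rather than the actual torchbench model:

    import torch
    import torch._dynamo

    # Log backend compile errors and fall back to eager instead of raising
    # torch._dynamo.exc.BackendCompilerFailed.
    torch._dynamo.config.suppress_errors = True

    model = torch.nn.Linear(8, 8)        # placeholder for the benchmark model
    example = torch.randn(4, 8)          # placeholder input
    compiled = torch.compile(model, backend="inductor")
    out = compiled(example)              # a failing backend now warns and runs eagerly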
2023-05-06T14:49:17.5727964Z 2023-05-06T14:49:17.5728539Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T14:50:18.0861902Z cuda eval mobilenet_v3_large 1.433x 2023-05-06T14:50:25.1879730Z cuda eval moco [2023-05-06 14:50:25,186] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T14:58:58.6679478Z [2023-05-06 14:58:58,665] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T14:58:58.6680406Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T14:58:58.6681825Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T14:59:47.4265629Z [2023-05-06 14:59:47,424] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T15:00:06.0914455Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T15:00:06.0914984Z Traceback (most recent call last): 2023-05-06T15:00:06.0915334Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T15:00:06.0915684Z fn(model, example_inputs) 2023-05-06T15:00:06.0917974Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:00:06.0918339Z return fn(*args, **kwargs) 2023-05-06T15:00:06.0918689Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 392, in forward_pass 2023-05-06T15:00:06.0920516Z return mod(*inputs) 2023-05-06T15:00:06.0921603Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:00:06.0922349Z return self._call_impl(*args, **kwargs) 2023-05-06T15:00:06.0922971Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:00:06.0923315Z return forward_call(*args, **kwargs) 2023-05-06T15:00:06.0923842Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T15:00:06.0924223Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T15:00:06.0925149Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T15:00:06.0925633Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T15:00:06.0926186Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:00:06.0926670Z return self._call_impl(*args, **kwargs) 2023-05-06T15:00:06.0927442Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:00:06.0927982Z return forward_call(*args, **kwargs) 2023-05-06T15:00:06.0928840Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T15:00:06.0929450Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T15:00:06.0930057Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T15:00:06.0930681Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T15:00:06.0931738Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T15:00:06.0932399Z return func(*args, **kwargs) 2023-05-06T15:00:06.0932811Z 
File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 76, in _batch_shuffle_ddp 2023-05-06T15:00:06.0933159Z x_gather = concat_all_gather(x) 2023-05-06T15:00:06.0933667Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T15:00:06.0934068Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T15:00:06.0934606Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T15:00:06.0935010Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T15:00:06.0935585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T15:00:06.0935924Z return fn(*args, **kwargs) 2023-05-06T15:00:06.0936420Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T15:00:06.0936765Z return _compile( 2023-05-06T15:00:06.0937231Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:00:06.0937551Z r = func(*args, **kwargs) 2023-05-06T15:00:06.0938029Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T15:00:06.0938413Z out_code = transform_code_object(code, transform) 2023-05-06T15:00:06.0938996Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T15:00:06.0939407Z transformations(instructions, code_options) 2023-05-06T15:00:06.0939921Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T15:00:06.0940250Z tracer.run() 2023-05-06T15:00:06.0940704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T15:00:06.0941027Z super().run() 2023-05-06T15:00:06.0941487Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T15:00:06.0941816Z and self.step() 2023-05-06T15:00:06.0942272Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T15:00:06.0942630Z getattr(self, inst.opname)(inst) 2023-05-06T15:00:06.0943194Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T15:00:06.0943762Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T15:00:06.0944313Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T15:00:06.0944732Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T15:00:06.0945107Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:00:06.0945441Z return func(*args, **kwds) 2023-05-06T15:00:06.0945968Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T15:00:06.0946354Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T15:00:06.0946958Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:00:06.0947294Z r = func(*args, **kwargs) 
2023-05-06T15:00:06.0947798Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T15:00:06.0948224Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T15:00:06.0948794Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T15:00:06.0949191Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T15:00:06.0949738Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T15:00:06.0950121Z return self.backend_compile_fn(gm, example_inputs) 2023-05-06T15:00:06.0950661Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T15:00:06.0975478Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T15:00:06.0976503Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T15:00:06.0977155Z return compile_fx(*args, **kwargs) 2023-05-06T15:00:06.0978028Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 628, in compile_fx 2023-05-06T15:00:06.0978643Z return compile_fx_with_cpp_wrapper( 2023-05-06T15:00:06.0979490Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 533, in compile_fx_with_cpp_wrapper 2023-05-06T15:00:06.0980103Z return compile_fx( 2023-05-06T15:00:06.0981193Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T15:00:06.0981713Z return aot_autograd( 2023-05-06T15:00:06.0982638Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T15:00:06.0983230Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T15:00:06.0984149Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T15:00:06.0984751Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T15:00:06.0985599Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:00:06.0986110Z r = func(*args, **kwargs) 2023-05-06T15:00:06.0987035Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T15:00:06.0987841Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:00:06.0988730Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T15:00:06.0989400Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:00:06.0990345Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T15:00:06.0991386Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:00:06.0992315Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T15:00:06.0992988Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T15:00:06.0993861Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:00:06.0994430Z r = func(*args, **kwargs) 2023-05-06T15:00:06.0995302Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T15:00:06.0996164Z return inner_compile( 2023-05-06T15:00:06.0997171Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T15:00:06.0997846Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T15:00:06.0998729Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T15:00:06.0999250Z return fn(*args, **kwargs) 2023-05-06T15:00:06.0999684Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:00:06.1000103Z return func(*args, **kwds) 2023-05-06T15:00:06.1000817Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T15:00:06.1001345Z compiled_fn = graph.compile_to_fn() 2023-05-06T15:00:06.1001853Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T15:00:06.1002218Z return self.compile_to_module().call 2023-05-06T15:00:06.1002703Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:00:06.1003046Z r = func(*args, **kwargs) 2023-05-06T15:00:06.1003531Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 695, in compile_to_module 2023-05-06T15:00:06.1003903Z mod = PyCodeCache.load(code, linemap=linemap) 2023-05-06T15:00:06.1004411Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 706, in load 2023-05-06T15:00:06.1004777Z return cls.load_by_key_path(key, path, linemap) 2023-05-06T15:00:06.1005305Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 721, in load_by_key_path 2023-05-06T15:00:06.1005717Z exec(code, mod.__dict__, mod.__dict__) 2023-05-06T15:00:06.1006138Z File "/tmp/tmp_nl687uf/gb/cgbhxueob4q3brhsiulgl22ovioyww2dkgdtbx6hcpqvg4pae2rq.py", line 56, in 2023-05-06T15:00:06.1006520Z module = load_inline( 2023-05-06T15:00:06.1006988Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1433, in load_inline 2023-05-06T15:00:06.1007332Z return _jit_compile( 2023-05-06T15:00:06.1007818Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1508, in _jit_compile 2023-05-06T15:00:06.1008178Z _write_ninja_file_and_build_library( 2023-05-06T15:00:06.1008716Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1623, in _write_ninja_file_and_build_library 2023-05-06T15:00:06.1009071Z _run_ninja_build( 2023-05-06T15:00:06.1009558Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1910, in _run_ninja_build 2023-05-06T15:00:06.1009910Z raise RuntimeError(message) from e 2023-05-06T15:00:06.1010354Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T15:00:06.1014249Z RuntimeError: Error building extension 'inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7': [1/2] c++ -MMD -MF main.o.d 
-DTORCH_EXTENSION_NAME=inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp -o main.o 2023-05-06T15:00:06.1017126Z FAILED: main.o 2023-05-06T15:00:06.1020392Z c++ -MMD -MF main.o.d -DTORCH_EXTENSION_NAME=inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7 -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/var/lib/jenkins/workspace/-I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -I/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -I/usr/local/cuda/include -I/opt/conda/envs/py_3.10/include/python3.10 -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -std=c++17 -Wno-unused-variable -O3 -ffast-math -fno-finite-math-only -march=native -fopenmp -Wall -D C10_USING_CUSTOM_GENERATED_MACROS -c /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp -o main.o 2023-05-06T15:00:06.1022959Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:53: warning: multi-character character constant [-Wmultichar] 2023-05-06T15:00:06.1024197Z auto buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1024525Z ^~~~~ 2023-05-06T15:00:06.1025855Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp: In function ‘std::vector inductor_entry_cpp(const std::vector&)’: 2023-05-06T15:00:06.1027007Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:34: error: 
‘device’ was not declared in this scope 2023-05-06T15:00:06.1027635Z auto buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1027929Z ^~~~~~ 2023-05-06T15:00:06.1028413Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:34: note: suggested alternatives: 2023-05-06T15:00:06.1029125Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T15:00:06.1029858Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T15:00:06.1030421Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T15:00:06.1030963Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T15:00:06.1031537Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T15:00:06.1032131Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T15:00:06.1032831Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T15:00:06.1033417Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T15:00:06.1034012Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T15:00:06.1034625Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T15:00:06.1035192Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1035775Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1036550Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T15:00:06.1037287Z inline TensorOptions device(Device device) { 2023-05-06T15:00:06.1037544Z ^~~~~~ 2023-05-06T15:00:06.1038056Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T15:00:06.1038648Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T15:00:06.1039233Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T15:00:06.1039809Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T15:00:06.1040389Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T15:00:06.1041034Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T15:00:06.1041700Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T15:00:06.1042374Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T15:00:06.1043020Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T15:00:06.1043647Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T15:00:06.1044280Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T15:00:06.1044893Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T15:00:06.1045533Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T15:00:06.1046274Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1046811Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1047613Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T15:00:06.1047988Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1048195Z ^ 2023-05-06T15:00:06.1048753Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T15:00:06.1049239Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1049448Z ^ 2023-05-06T15:00:06.1050008Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T15:00:06.1050380Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1050586Z ^ 2023-05-06T15:00:06.1051046Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorImpl.h:11:0, 2023-05-06T15:00:06.1051637Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:20, 2023-05-06T15:00:06.1052193Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T15:00:06.1052717Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T15:00:06.1053294Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T15:00:06.1053894Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T15:00:06.1054479Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T15:00:06.1055059Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T15:00:06.1055716Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T15:00:06.1056322Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T15:00:06.1056882Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1057408Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1058170Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T15:00:06.1058554Z 
inline TensorOptions device(Device device) { 2023-05-06T15:00:06.1058792Z ^~~~~~ 2023-05-06T15:00:06.1059354Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T15:00:06.1060034Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/core/TensorOptions.h:584:22: note: ‘c10::device’ 2023-05-06T15:00:06.1060636Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T15:00:06.1061229Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T15:00:06.1061844Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T15:00:06.1062459Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T15:00:06.1063146Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T15:00:06.1063790Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T15:00:06.1064453Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T15:00:06.1065113Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T15:00:06.1065903Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T15:00:06.1066567Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T15:00:06.1067213Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T15:00:06.1067827Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T15:00:06.1068411Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T15:00:06.1068972Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1069509Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1070311Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::prim::device’ 2023-05-06T15:00:06.1070672Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1070893Z ^ 2023-05-06T15:00:06.1071447Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::device’ 2023-05-06T15:00:06.1071798Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1072017Z ^ 2023-05-06T15:00:06.1072570Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::device’ 2023-05-06T15:00:06.1072914Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1073127Z ^ 2023-05-06T15:00:06.1073783Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:48: error: ‘type’ was not declared in this scope 2023-05-06T15:00:06.1074436Z auto 
buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1074725Z ^~~~ 2023-05-06T15:00:06.1075196Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:48: note: suggested alternatives: 2023-05-06T15:00:06.1075998Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/type_caster_base.h:12:0, 2023-05-06T15:00:06.1076739Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/cast.h:15, 2023-05-06T15:00:06.1077438Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/attr.h:14, 2023-05-06T15:00:06.1078006Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/detail/class.h:12, 2023-05-06T15:00:06.1078585Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:13, 2023-05-06T15:00:06.1079139Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/Exceptions.h:14, 2023-05-06T15:00:06.1079918Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/python.h:11, 2023-05-06T15:00:06.1080498Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:6, 2023-05-06T15:00:06.1081035Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1081813Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/pybind11/pytypes.h:1412:7: note: ‘pybind11::type’ 2023-05-06T15:00:06.1082171Z class type : public object { 2023-05-06T15:00:06.1082395Z ^~~~ 2023-05-06T15:00:06.1083011Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/profiler/stubs/base.h:6:0, 2023-05-06T15:00:06.1083639Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler_kineto.h:8, 2023-05-06T15:00:06.1084250Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/profiler.h:3, 2023-05-06T15:00:06.1084858Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:7, 2023-05-06T15:00:06.1085546Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5, 2023-05-06T15:00:06.1086142Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3, 2023-05-06T15:00:06.1086746Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:16, 2023-05-06T15:00:06.1087318Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1087860Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1088614Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/c10/util/strong_type.h:87:7: note: ‘strong::type’ 2023-05-06T15:00:06.1089007Z class type : public modifier>... 
2023-05-06T15:00:06.1089260Z ^~~~ 2023-05-06T15:00:06.1089934Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:61: error: ‘pin_memory’ was not declared in this scope 2023-05-06T15:00:06.1090586Z auto buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1090949Z ^~~~~~~~~~ 2023-05-06T15:00:06.1091648Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:61: note: suggested alternatives: 2023-05-06T15:00:06.1092767Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T15:00:06.1093849Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T15:00:06.1094914Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T15:00:06.1095999Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T15:00:06.1096920Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T15:00:06.1097978Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T15:00:06.1099084Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T15:00:06.1100559Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T15:00:06.1101617Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T15:00:06.1102666Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T15:00:06.1103774Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T15:00:06.1105073Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T15:00:06.1105797Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T15:00:06.1106384Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1106927Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1107792Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T15:00:06.1108157Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1108377Z ^ 2023-05-06T15:00:06.1108948Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T15:00:06.1109320Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1109535Z ^ 2023-05-06T15:00:06.1110009Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/MethodOperators.h:302:0, 2023-05-06T15:00:06.1110615Z from 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/TensorBody.h:40, 2023-05-06T15:00:06.1111165Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/Tensor.h:3, 2023-05-06T15:00:06.1111705Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/Tensor.h:3, 2023-05-06T15:00:06.1112286Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/function_hook.h:3, 2023-05-06T15:00:06.1112882Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/cpp_hook.h:2, 2023-05-06T15:00:06.1113477Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/variable.h:6, 2023-05-06T15:00:06.1114069Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/autograd/autograd.h:3, 2023-05-06T15:00:06.1114684Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/autograd.h:3, 2023-05-06T15:00:06.1115289Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7, 2023-05-06T15:00:06.1115889Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1116426Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1117917Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_ops.h:17:18: note: ‘at::_ops::pin_memory’ 2023-05-06T15:00:06.1118606Z struct TORCH_API pin_memory { 2023-05-06T15:00:06.1118938Z ^~~~~~~~~~ 2023-05-06T15:00:06.1119705Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/NativeFunctions.h:945:0, 2023-05-06T15:00:06.1120854Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/TensorIndexing.h:13, 2023-05-06T15:00:06.1121619Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ATen.h:18, 2023-05-06T15:00:06.1122436Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3, 2023-05-06T15:00:06.1123349Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4, 2023-05-06T15:00:06.1124281Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3, 2023-05-06T15:00:06.1125446Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:4, 2023-05-06T15:00:06.1126393Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3, 2023-05-06T15:00:06.1127266Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3, 2023-05-06T15:00:06.1128111Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T15:00:06.1128896Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1129639Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1130799Z 
/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/ops/pin_memory_native.h:19:22: note: ‘at::native::pin_memory’ 2023-05-06T15:00:06.1131461Z TORCH_API at::Tensor pin_memory(const at::Tensor & self, c10::optional device=c10::nullopt); 2023-05-06T15:00:06.1131894Z ^~~~~~~~~~ 2023-05-06T15:00:06.1132571Z In file included from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/ir/ir.h:18:0, 2023-05-06T15:00:06.1133447Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/function_impl.h:4, 2023-05-06T15:00:06.1134291Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/method.h:7, 2023-05-06T15:00:06.1135092Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/object.h:6, 2023-05-06T15:00:06.1136014Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/jit/api/module.h:4, 2023-05-06T15:00:06.1136975Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/input-archive.h:6, 2023-05-06T15:00:06.1137913Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/serialize/archive.h:3, 2023-05-06T15:00:06.1138856Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers/serialize.h:4, 2023-05-06T15:00:06.1139788Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/samplers.h:8, 2023-05-06T15:00:06.1140764Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets/chunk.h:7, 2023-05-06T15:00:06.1141694Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data/datasets.h:4, 2023-05-06T15:00:06.1142564Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/data.h:4, 2023-05-06T15:00:06.1143664Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include/torch/all.h:9, 2023-05-06T15:00:06.1144524Z from /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/extension.h:4, 2023-05-06T15:00:06.1145304Z from /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:1: 2023-05-06T15:00:06.1146581Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::attr::pin_memory’ 2023-05-06T15:00:06.1147135Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1147461Z ^ 2023-05-06T15:00:06.1148452Z /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/ATen/core/interned_strings.h:353:1: note: ‘c10::aten::pin_memory’ 2023-05-06T15:00:06.1148981Z FORALL_NS_SYMBOLS(DEFINE_SYMBOL) 2023-05-06T15:00:06.1149304Z ^ 2023-05-06T15:00:06.1150322Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:72: error: ‘False’ was not declared in this scope 2023-05-06T15:00:06.1151223Z auto buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1151655Z ^~~~~ 2023-05-06T15:00:06.1152625Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:34:72: note: suggested alternative: ‘pause’ 
2023-05-06T15:00:06.1153470Z auto buf0 = at::randperm(64, device=device(type='cpu'), pin_memory=False); 2023-05-06T15:00:06.1153871Z ^~~~~ 2023-05-06T15:00:06.1154257Z pause 2023-05-06T15:00:06.1155366Z /var/lib/jenkins/.cache/torch_extensions/py310_cu118/inline_extension_clrtjg4jpfsmfsnce4be5rwof6uhl2fvnyrx5b7kmes4zri6piy7/main.cpp:35:17: error: unable to deduce ‘auto’ from ‘buf0’ 2023-05-06T15:00:06.1155977Z auto buf1 = buf0; 2023-05-06T15:00:06.1156237Z ^~~~ 2023-05-06T15:00:06.1156555Z ninja: build stopped: subcommand failed. 2023-05-06T15:00:06.1156952Z 2023-05-06T15:00:06.1156959Z 2023-05-06T15:00:06.1156966Z 2023-05-06T15:00:06.1157197Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:00:06.1157569Z import torch._dynamo 2023-05-06T15:00:06.1157979Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:00:06.1158228Z 2023-05-06T15:00:08.9801974Z ERROR 2023-05-06T15:00:49.1112420Z cuda eval nvidia_deeprecommender 0.994x 2023-05-06T15:00:53.8849167Z cuda eval opacus_cifar10 [2023-05-06 15:00:53,883] torch._dynamo.output_graph: [WARNING] nn.Module forward/_pre hooks are only partially supported, and were detected in your model. In particular, if you do not change/remove hooks after calling .compile(), you can disregard this warning, and otherwise you may need to set torch._dynamo.config.skip_nnmodule_hook_guards=False to ensure recompiling after changing hooks.See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 2023-05-06T15:00:53.8851295Z [2023-05-06 15:00:53,883] torch._dynamo.output_graph: [WARNING] nn.Module state_dict and backward hooks are not yet supported by torch.compile, but were detected in your model and will be silently ignored. See https://pytorch.org/docs/master/compile/nn-module.html for more information and limitations. 
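The moco build failure above happens because the experimental C++ wrapper path writes the Python-style call at::randperm(64, device=device(type='cpu'), pin_memory=False) verbatim into main.cpp, and C++ has no keyword arguments (a valid spelling would pass an at::TensorOptions instead). Assuming the perf job enabled that wrapper through inductor's config, one way to sidestep this class of failure is to revert to the default Python wrapper codegen for the run; a hedged sketch, with the linear module standing in for the benchmarked callable:

    import torch
    import torch._inductor.config as inductor_config

    # Assumption: cpp_wrapper was turned on for this run; disabling it keeps the
    # default Python wrapper codegen and avoids emitting the invalid C++ randperm call.
    inductor_config.cpp_wrapper = False

    fn = torch.nn.Linear(8, 8)           # placeholder for the benchmarked callable
    compiled = torch.compile(fn, backend="inductor")
    out = compiled(torch.randn(4, 8))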
2023-05-06T15:06:29.3792891Z 0.755x 2023-05-06T15:07:29.5539585Z cuda eval phlippe_densenet 2.130x 2023-05-06T15:08:14.1169137Z cuda eval phlippe_resnet 1.687x 2023-05-06T15:08:15.1667933Z abs_latency gmean=0.00x mean=19.827x 2023-05-06T15:08:15.1668532Z compilation_latency mean=115.557 seconds 2023-05-06T15:08:15.1669296Z compression_ratio mean=0.853x 2023-05-06T15:08:15.1672026Z eager_peak_mem gmean=0.00x mean=1.138x 2023-05-06T15:08:15.1674332Z dynamo_peak_mem gmean=0.00x mean=1.096x 2023-05-06T15:08:15.1677888Z calls_captured gmean=0.00x mean=623.500x 2023-05-06T15:08:15.1681758Z unique_graphs gmean=0.00x mean=13.038x 2023-05-06T15:08:15.1683986Z graph_breaks gmean=0.00x mean=5.385x 2023-05-06T15:08:15.1686975Z unique_graph_breaks gmean=0.00x mean=1.077x 2023-05-06T15:08:15.7175612Z + for mode in inference training 2023-05-06T15:08:15.7176118Z + [[ inductor_torchbench_perf == *max_autotune* ]] 2023-05-06T15:08:15.7180525Z + python benchmarks/dynamo/torchbench.py --accuracy --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_no_cudagraphs_torchbench_amp_training_cuda_accuracy.csv 2023-05-06T15:08:29.8794358Z cuda train functorch_maml_omniglot pass 2023-05-06T15:09:07.2853533Z cuda train hf_Albert pass 2023-05-06T15:09:59.6364217Z cuda train hf_Bart pass 2023-05-06T15:10:41.7950458Z cuda train hf_Bert pass 2023-05-06T15:11:56.8352789Z cuda train hf_Bert_large pass 2023-05-06T15:14:26.7623745Z cuda train hf_BigBird pass 2023-05-06T15:14:54.5395010Z cuda train hf_DistilBert pass 2023-05-06T15:15:35.5276988Z cuda train hf_GPT2 pass 2023-05-06T15:15:57.2659027Z cuda train hf_GPT2_large pass_due_to_skip 2023-05-06T15:16:36.8097153Z cuda train hf_Longformer ERROR:common:backend='inductor' raised: 2023-05-06T15:16:36.8098026Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 
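The hf_Longformer error just above is the familiar view-on-incompatible-strides failure: a tensor with strides (6303744, 513, 6156, 1) cannot be reinterpreted as shape (48, 4, 256, 513) without copying. An illustrative reproduction of the same failure mode (not the model's actual tensors), assuming only stock PyTorch:

    import torch

    x = torch.randn(4, 12, 1024, 513).transpose(1, 2)  # non-contiguous layout, same element count
    # x.view(48, 4, 256, 513)       # raises RuntimeError: view size is not compatible with input's size and stride
    y = x.reshape(48, 4, 256, 513)  # reshape falls back to a copy when a zero-copy view is impossible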
2023-05-06T15:16:36.8098347Z 2023-05-06T15:16:36.8098353Z 2023-05-06T15:16:36.8098549Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:16:36.8098830Z import torch._dynamo 2023-05-06T15:16:36.8099115Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:16:36.8099407Z Traceback (most recent call last): 2023-05-06T15:16:36.8099743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:16:36.8100150Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:16:36.8100787Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:16:36.8101110Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8101451Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:16:36.8101829Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:16:36.8102224Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:16:36.8106806Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:16:36.8107882Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:16:36.8108846Z self.optimizer_zero_grad(mod) 2023-05-06T15:16:36.8109638Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:16:36.8110208Z pred = mod(*cloned_inputs) 2023-05-06T15:16:36.8111166Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:16:36.8111879Z return self._call_impl(*args, **kwargs) 2023-05-06T15:16:36.8112880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:16:36.8114172Z return forward_call(*args, **kwargs) 2023-05-06T15:16:36.8115116Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T15:16:36.8116009Z outputs = self.longformer( 2023-05-06T15:16:36.8116549Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:16:36.8117441Z return self._call_impl(*args, **kwargs) 2023-05-06T15:16:36.8117979Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:16:36.8118338Z return forward_call(*args, **kwargs) 2023-05-06T15:16:36.8118887Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T15:16:36.8119271Z encoder_outputs = self.encoder( 2023-05-06T15:16:36.8120027Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:16:36.8120407Z return self._call_impl(*args, **kwargs) 2023-05-06T15:16:36.8120913Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:16:36.8121272Z return forward_call(*args, **kwargs) 2023-05-06T15:16:36.8121822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T15:16:36.8122230Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T15:16:36.8122768Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T15:16:36.8123149Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T15:16:36.8123688Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T15:16:36.8124074Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T15:16:36.8124593Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T15:16:36.8124935Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8125516Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T15:16:36.8125870Z return _compile( 2023-05-06T15:16:36.8126344Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:16:36.8126672Z r = func(*args, **kwargs) 2023-05-06T15:16:36.8127137Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T15:16:36.8127518Z out_code = transform_code_object(code, transform) 2023-05-06T15:16:36.8128100Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T15:16:36.8128504Z transformations(instructions, code_options) 2023-05-06T15:16:36.8129030Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T15:16:36.8129358Z tracer.run() 2023-05-06T15:16:36.8129828Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T15:16:36.8130139Z super().run() 2023-05-06T15:16:36.8130605Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T15:16:36.8130929Z and self.step() 2023-05-06T15:16:36.8131388Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T15:16:36.8131734Z getattr(self, inst.opname)(inst) 2023-05-06T15:16:36.8132264Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T15:16:36.8132629Z self.output.compile_subgraph( 2023-05-06T15:16:36.8133289Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T15:16:36.8133712Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T15:16:36.8134085Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:16:36.8134379Z return func(*args, **kwds) 2023-05-06T15:16:36.8134911Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T15:16:36.8135338Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T15:16:36.8135823Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:16:36.8136153Z r = func(*args, **kwargs) 2023-05-06T15:16:36.8136840Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T15:16:36.8137315Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 
2023-05-06T15:16:36.8137924Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T15:16:36.8138313Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T15:16:36.8138862Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T15:16:36.8139249Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T15:16:36.8139771Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T15:16:36.8140110Z return compile_fx(*args, **kwargs) 2023-05-06T15:16:36.8140619Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T15:16:36.8140952Z return aot_autograd( 2023-05-06T15:16:36.8141430Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T15:16:36.8141832Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T15:16:36.8142397Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T15:16:36.8142793Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T15:16:36.8143286Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:16:36.8143614Z r = func(*args, **kwargs) 2023-05-06T15:16:36.8144150Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T15:16:36.8144599Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:16:36.8145240Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T15:16:36.8145683Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:16:36.8146293Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T15:16:36.8146718Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:16:36.8147342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T15:16:36.8147721Z fx_g = create_functionalized_graph( 2023-05-06T15:16:36.8148282Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T15:16:36.8148738Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T15:16:36.8149321Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T15:16:36.8149908Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T15:16:36.8150475Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:16:36.8150809Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8151293Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:16:36.8151632Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8152135Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T15:16:36.8152525Z graph = tracer.trace(root, concrete_args) 2023-05-06T15:16:36.8153112Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:16:36.8153437Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8153926Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:16:36.8154261Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8154851Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T15:16:36.8155215Z (self.create_arg(fn(*args)),), 2023-05-06T15:16:36.8155710Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T15:16:36.8156051Z tree_out = root_fn(*tree_args) 2023-05-06T15:16:36.8156543Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T15:16:36.8157185Z out = f(*tensors) 2023-05-06T15:16:36.8157703Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T15:16:36.8158102Z return functionalized_f_helper(primals, tangents) 2023-05-06T15:16:36.8158663Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 2023-05-06T15:16:36.8159020Z f_outs = fn(*f_args) 2023-05-06T15:16:36.8159508Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T15:16:36.8159858Z backward_out = torch.autograd.grad( 2023-05-06T15:16:36.8160355Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T15:16:36.8160688Z return handle_torch_function( 2023-05-06T15:16:36.8161184Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T15:16:36.8161577Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T15:16:36.8162127Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T15:16:36.8162514Z return replace_fn(func)(*args, **kwargs) 2023-05-06T15:16:36.8162986Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T15:16:36.8163424Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:16:36.8163975Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T15:16:36.8164306Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8164819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T15:16:36.8165302Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T15:16:36.8165903Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T15:16:36.8166513Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T15:16:36.8167083Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T15:16:36.8167438Z out = func(*args, **kwargs) 2023-05-06T15:16:36.8167897Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T15:16:36.8168223Z return self._op(*args, **kwargs or {}) 2023-05-06T15:16:36.8168705Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T15:16:36.8169031Z return fn(*args, **kwargs) 2023-05-06T15:16:36.8169640Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T15:16:36.8170035Z return self.dispatch(func, types, args, kwargs) 2023-05-06T15:16:36.8170570Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1314, in dispatch 2023-05-06T15:16:36.8170920Z r = func(*args, **kwargs) 2023-05-06T15:16:36.8171352Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T15:16:36.8171690Z return self._op(*args, **kwargs or {}) 2023-05-06T15:16:36.8172160Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T15:16:36.8172518Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T15:16:36.8173049Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T15:16:36.8173387Z raise ValueError(msg) 2023-05-06T15:16:36.8173805Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T15:16:36.8174240Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 2023-05-06T15:16:36.8174504Z 2023-05-06T15:16:36.8174509Z 2023-05-06T15:16:36.8174675Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:16:36.8174963Z import torch._dynamo 2023-05-06T15:16:36.8175265Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:16:36.8175447Z 2023-05-06T15:16:36.8175618Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:16:36.8239980Z fail_to_run 2023-05-06T15:17:07.9228228Z cuda train hf_Reformer pass 2023-05-06T15:17:53.9846661Z cuda train hf_T5 pass 2023-05-06T15:19:18.2756260Z cuda train hf_T5_base pass 2023-05-06T15:19:38.1303710Z cuda train hf_T5_large pass_due_to_skip 2023-05-06T15:19:48.4039843Z cuda train lennard_jones pass 2023-05-06T15:19:52.9279741Z cuda train llama WARNING:common:fp64 golden ref were not generated for llama. 
Setting accuracy check to cosine 2023-05-06T15:19:53.0646648Z eager_1st_run_fail 2023-05-06T15:20:04.6887600Z cuda train maml_omniglot pass 2023-05-06T15:20:40.2749682Z cuda train mnasnet1_0 pass 2023-05-06T15:21:16.9257450Z cuda train mobilenet_v2 pass 2023-05-06T15:21:23.5322514Z Eager model failed to run 2023-05-06T15:21:23.5322872Z Traceback (most recent call last): 2023-05-06T15:21:23.5323243Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T15:21:23.5323605Z self.model_iter_fn(model, example_inputs) 2023-05-06T15:21:23.5327937Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T15:21:23.5328511Z pred = mod(*cloned_inputs) 2023-05-06T15:21:23.5329888Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T15:21:23.5330282Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T15:21:23.5331907Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T15:21:23.5332514Z raise e 2023-05-06T15:21:23.5332976Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T15:21:23.5333374Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T15:21:23.5333923Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:21:23.5334297Z return self._call_impl(*args, **kwargs) 2023-05-06T15:21:23.5334935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:21:23.5335520Z return forward_call(*args, **kwargs) 2023-05-06T15:21:23.5335805Z File ".3", line 207, in forward 2023-05-06T15:21:23.5336170Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T15:21:23.5336786Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:21:23.5337141Z return self._call_impl(*args, **kwargs) 2023-05-06T15:21:23.5337639Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:21:23.5337992Z return forward_call(*args, **kwargs) 2023-05-06T15:21:23.5338493Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T15:21:23.5338876Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T15:21:23.5339197Z RuntimeError: expected scalar type Float but found Half 2023-05-06T15:21:23.5339396Z 2023-05-06T15:21:23.5339563Z The above exception was the direct cause of the following exception: 2023-05-06T15:21:23.5339770Z 2023-05-06T15:21:23.5339869Z Traceback (most recent call last): 2023-05-06T15:21:23.5340199Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T15:21:23.5340567Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T15:21:23.5340936Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T15:21:23.5341291Z self.validate_model(model, example_inputs) 2023-05-06T15:21:23.5341646Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T15:21:23.5342026Z raise NotImplementedError("Eager model failed 
to run") from e 2023-05-06T15:21:23.5342345Z NotImplementedError: Eager model failed to run 2023-05-06T15:21:23.5342523Z 2023-05-06T15:21:23.5342670Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T15:22:03.7656865Z cuda train mobilenet_v3_large pass 2023-05-06T15:22:13.1096059Z cuda train moco [2023-05-06 15:22:13,108] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T15:22:55.1367660Z [2023-05-06 15:22:55,134] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T15:22:55.1368416Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T15:22:55.1369841Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T15:22:55.3337697Z [2023-05-06 15:22:55,333] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T15:23:04.9561419Z ERROR:common:element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T15:23:04.9562088Z Traceback (most recent call last): 2023-05-06T15:23:04.9562647Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:23:04.9563246Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:23:04.9564887Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:23:04.9565421Z return fn(*args, **kwargs) 2023-05-06T15:23:04.9565965Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:23:04.9566535Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:23:04.9567147Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:23:04.9567698Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:23:04.9568289Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:23:04.9569146Z self.optimizer_zero_grad(mod) 2023-05-06T15:23:04.9569764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:23:04.9570429Z pred = mod(*cloned_inputs) 2023-05-06T15:23:04.9570991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 399, in 2023-05-06T15:23:04.9571538Z loss = self.compute_loss(pred) 2023-05-06T15:23:04.9572107Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in 2023-05-06T15:23:04.9572678Z self.grad_scaler.scale(loss).backward() 2023-05-06T15:23:04.9573504Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T15:23:04.9574004Z torch.autograd.backward( 2023-05-06T15:23:04.9574776Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T15:23:04.9575482Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:23:04.9576097Z RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T15:23:04.9576684Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:23:04.9708927Z fail_to_run 2023-05-06T15:23:21.6357566Z cuda train nvidia_deeprecommender pass 2023-05-06T15:23:26.5744997Z Eager model failed to run 2023-05-06T15:23:26.5754489Z Traceback (most recent call last): 
2023-05-06T15:23:26.5755210Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T15:23:26.5755717Z self.model_iter_fn(model, example_inputs) 2023-05-06T15:23:26.5756300Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T15:23:26.5760214Z self.grad_scaler.scale(loss).backward() 2023-05-06T15:23:26.5761841Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T15:23:26.5762344Z torch.autograd.backward( 2023-05-06T15:23:26.5762875Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T15:23:26.5763310Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:23:26.5763910Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 69, in __call__ 2023-05-06T15:23:26.5764269Z return self.hook(module, *args, **kwargs) 2023-05-06T15:23:26.5764845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook 2023-05-06T15:23:26.5765268Z activations, backprops = self.rearrange_grad_samples( 2023-05-06T15:23:26.5765917Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 374, in rearrange_grad_samples 2023-05-06T15:23:26.5766285Z raise ValueError( 2023-05-06T15:23:26.5766793Z ValueError: No activations detected for , run forward after add_hooks(model) 2023-05-06T15:23:26.5767658Z 2023-05-06T15:23:26.5767829Z The above exception was the direct cause of the following exception: 2023-05-06T15:23:26.5768036Z 2023-05-06T15:23:26.5768152Z Traceback (most recent call last): 2023-05-06T15:23:26.5768485Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T15:23:26.5768839Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T15:23:26.5769219Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T15:23:26.5769570Z self.validate_model(model, example_inputs) 2023-05-06T15:23:26.5769928Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T15:23:26.5770450Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T15:23:26.5770791Z NotImplementedError: Eager model failed to run 2023-05-06T15:23:26.5770972Z 2023-05-06T15:23:26.5771107Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T15:24:00.8597687Z cuda train phlippe_densenet pass 2023-05-06T15:24:19.1481645Z cuda train phlippe_resnet pass 2023-05-06T15:24:20.4234985Z accuracy pass_rate=79.17% 2023-05-06T15:24:20.4235842Z calls_captured gmean=0.00x mean=505.542x 2023-05-06T15:24:20.4236229Z unique_graphs gmean=0.00x mean=9.625x 2023-05-06T15:24:20.4237658Z graph_breaks gmean=0.00x mean=15.333x 2023-05-06T15:24:20.4240610Z unique_graph_breaks gmean=0.00x mean=5.875x 2023-05-06T15:24:21.0141020Z + python benchmarks/dynamo/torchbench.py --accuracy --training --amp --backend inductor --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_with_cudagraphs_torchbench_amp_training_cuda_accuracy.csv 2023-05-06T15:24:34.6504641Z cuda train functorch_maml_omniglot pass 2023-05-06T15:25:11.2047262Z cuda train hf_Albert pass 2023-05-06T15:26:02.0897852Z cuda train hf_Bart pass 
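Editor's note: the torchbench.py flags used in the two invocations above (--backend inductor, --amp, --disable-cudagraphs) roughly correspond to the torch.compile usage sketched below. This is a simplified illustration, not the harness's own code; run_one_model is a hypothetical helper, the inductor option key is an assumption based on the torch.compile options dictionary, and the autocast dtype stands in for whatever --amp selects.

import torch

def run_one_model(model, example_inputs, disable_cudagraphs=False):
    # Sketch only: the real harness also clones inputs, computes a loss, and runs backward.
    options = {"triton.cudagraphs": not disable_cudagraphs}   # assumed mapping for --disable-cudagraphs
    compiled = torch.compile(model, backend="inductor", options=options)
    with torch.autocast("cuda", dtype=torch.float16):         # stands in for --amp
        return compiled(*example_inputs)

model = torch.nn.Linear(16, 16).cuda()
out = run_one_model(model, (torch.randn(8, 16, device="cuda"),), disable_cudagraphs=True)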
2023-05-06T15:26:44.1019877Z cuda train hf_Bert pass 2023-05-06T15:27:58.4703071Z cuda train hf_Bert_large pass 2023-05-06T15:31:07.5571003Z cuda train hf_BigBird pass 2023-05-06T15:31:34.7280852Z cuda train hf_DistilBert pass 2023-05-06T15:32:13.8721210Z cuda train hf_GPT2 pass 2023-05-06T15:32:35.3018593Z cuda train hf_GPT2_large pass_due_to_skip 2023-05-06T15:33:13.6019785Z cuda train hf_Longformer ERROR:common:backend='inductor' raised: 2023-05-06T15:33:13.6020324Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 2023-05-06T15:33:13.6020586Z 2023-05-06T15:33:13.6020593Z 2023-05-06T15:33:13.6020760Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:33:13.6021086Z import torch._dynamo 2023-05-06T15:33:13.6021353Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:33:13.6021639Z Traceback (most recent call last): 2023-05-06T15:33:13.6021984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:33:13.6022359Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:33:13.6023207Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:33:13.6023838Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6024385Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:33:13.6025146Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:33:13.6025853Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:33:13.6026222Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:33:13.6027226Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:33:13.6027576Z self.optimizer_zero_grad(mod) 2023-05-06T15:33:13.6027949Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:33:13.6028336Z pred = mod(*cloned_inputs) 2023-05-06T15:33:13.6028928Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:33:13.6029288Z return self._call_impl(*args, **kwargs) 2023-05-06T15:33:13.6029790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:33:13.6030307Z return forward_call(*args, **kwargs) 2023-05-06T15:33:13.6030865Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T15:33:13.6031260Z outputs = self.longformer( 2023-05-06T15:33:13.6031769Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:33:13.6032134Z return self._call_impl(*args, **kwargs) 2023-05-06T15:33:13.6032618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:33:13.6032967Z return forward_call(*args, **kwargs) 2023-05-06T15:33:13.6033520Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T15:33:13.6033892Z encoder_outputs = self.encoder( 2023-05-06T15:33:13.6034433Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in 
_wrapped_call_impl 2023-05-06T15:33:13.6034870Z return self._call_impl(*args, **kwargs) 2023-05-06T15:33:13.6035370Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:33:13.6035712Z return forward_call(*args, **kwargs) 2023-05-06T15:33:13.6036260Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T15:33:13.6037031Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T15:33:13.6037895Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T15:33:13.6038289Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T15:33:13.6038820Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T15:33:13.6039223Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T15:33:13.6039732Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T15:33:13.6040076Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6040588Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T15:33:13.6040953Z return _compile( 2023-05-06T15:33:13.6041616Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:33:13.6042069Z r = func(*args, **kwargs) 2023-05-06T15:33:13.6042603Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T15:33:13.6042966Z out_code = transform_code_object(code, transform) 2023-05-06T15:33:13.6043547Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T15:33:13.6043960Z transformations(instructions, code_options) 2023-05-06T15:33:13.6044465Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T15:33:13.6045103Z tracer.run() 2023-05-06T15:33:13.6045581Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T15:33:13.6045906Z super().run() 2023-05-06T15:33:13.6046357Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T15:33:13.6046679Z and self.step() 2023-05-06T15:33:13.6047150Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T15:33:13.6047487Z getattr(self, inst.opname)(inst) 2023-05-06T15:33:13.6048129Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T15:33:13.6048501Z self.output.compile_subgraph( 2023-05-06T15:33:13.6049018Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T15:33:13.6049432Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T15:33:13.6049802Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:33:13.6050102Z return func(*args, **kwds) 2023-05-06T15:33:13.6050617Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T15:33:13.6051007Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T15:33:13.6051508Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:33:13.6051834Z r = func(*args, **kwargs) 2023-05-06T15:33:13.6052326Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T15:33:13.6052753Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T15:33:13.6053322Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T15:33:13.6053705Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T15:33:13.6054249Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T15:33:13.6054629Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T15:33:13.6055204Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T15:33:13.6055551Z return compile_fx(*args, **kwargs) 2023-05-06T15:33:13.6056055Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T15:33:13.6056392Z return aot_autograd( 2023-05-06T15:33:13.6056871Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T15:33:13.6057274Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T15:33:13.6057836Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T15:33:13.6058226Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T15:33:13.6058718Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:33:13.6059044Z r = func(*args, **kwargs) 2023-05-06T15:33:13.6059617Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T15:33:13.6060076Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:33:13.6060666Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T15:33:13.6061211Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:33:13.6061820Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T15:33:13.6062245Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:33:13.6062829Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T15:33:13.6063202Z fx_g = create_functionalized_graph( 2023-05-06T15:33:13.6063758Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T15:33:13.6064300Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T15:33:13.6064959Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T15:33:13.6065433Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T15:33:13.6066012Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:33:13.6066328Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6066809Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:33:13.6067143Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6067641Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T15:33:13.6068027Z graph = tracer.trace(root, concrete_args) 2023-05-06T15:33:13.6068520Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:33:13.6068881Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6069358Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:33:13.6069694Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6070162Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T15:33:13.6070484Z (self.create_arg(fn(*args)),), 2023-05-06T15:33:13.6070976Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T15:33:13.6071311Z tree_out = root_fn(*tree_args) 2023-05-06T15:33:13.6071797Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T15:33:13.6072130Z out = f(*tensors) 2023-05-06T15:33:13.6072628Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T15:33:13.6073016Z return functionalized_f_helper(primals, tangents) 2023-05-06T15:33:13.6073574Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 2023-05-06T15:33:13.6073926Z f_outs = fn(*f_args) 2023-05-06T15:33:13.6074460Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T15:33:13.6074860Z backward_out = torch.autograd.grad( 2023-05-06T15:33:13.6075345Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T15:33:13.6075676Z return handle_torch_function( 2023-05-06T15:33:13.6076167Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T15:33:13.6076556Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T15:33:13.6077393Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T15:33:13.6077989Z return replace_fn(func)(*args, **kwargs) 2023-05-06T15:33:13.6078480Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T15:33:13.6078908Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:33:13.6079465Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 
2023-05-06T15:33:13.6079792Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6080299Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T15:33:13.6080715Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T15:33:13.6081435Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T15:33:13.6081860Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T15:33:13.6082414Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T15:33:13.6082766Z out = func(*args, **kwargs) 2023-05-06T15:33:13.6083217Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T15:33:13.6083554Z return self._op(*args, **kwargs or {}) 2023-05-06T15:33:13.6084014Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T15:33:13.6084337Z return fn(*args, **kwargs) 2023-05-06T15:33:13.6084916Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T15:33:13.6085299Z return self.dispatch(func, types, args, kwargs) 2023-05-06T15:33:13.6085821Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1314, in dispatch 2023-05-06T15:33:13.6086162Z r = func(*args, **kwargs) 2023-05-06T15:33:13.6086589Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T15:33:13.6086923Z return self._op(*args, **kwargs or {}) 2023-05-06T15:33:13.6087393Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T15:33:13.6087762Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T15:33:13.6088281Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T15:33:13.6088617Z raise ValueError(msg) 2023-05-06T15:33:13.6089027Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T15:33:13.6089466Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 
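Editor's note: this is the same view/stride failure seen in the first (no-cudagraphs) partition above. The fallback the log keeps suggesting can be applied once at the top of a driver script; a minimal runnable sketch, where Tiny is a placeholder module and not a benchmark model:

import torch
import torch._dynamo

torch._dynamo.config.suppress_errors = True   # fall back to eager when a backend raises

class Tiny(torch.nn.Module):                  # placeholder model
    def forward(self, x):
        return torch.relu(x)

compiled = torch.compile(Tiny(), backend="inductor")
out = compiled(torch.randn(8))
# With suppress_errors set, a frame whose compilation fails (like the Longformer view above)
# is logged and executed eagerly instead of raising BackendCompilerFailed.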
2023-05-06T15:33:13.6089764Z 2023-05-06T15:33:13.6089782Z 2023-05-06T15:33:13.6090818Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:33:13.6091235Z import torch._dynamo 2023-05-06T15:33:13.6091526Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:33:13.6091715Z 2023-05-06T15:33:13.6091877Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:33:13.6149528Z fail_to_run 2023-05-06T15:33:24.8014120Z cuda train hf_Reformer [2023-05-06 15:33:24,800] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:29.2852541Z [2023-05-06 15:33:29,284] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:31.7109319Z [2023-05-06 15:33:31,709] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:33.1638187Z [2023-05-06 15:33:33,163] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:35.3420478Z [2023-05-06 15:33:35,341] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:36.7258162Z [2023-05-06 15:33:36,725] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:38.6762199Z [2023-05-06 15:33:38,675] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:33:48.2689578Z pass 2023-05-06T15:34:33.1318907Z cuda train hf_T5 pass 2023-05-06T15:35:55.3541024Z cuda train hf_T5_base pass 2023-05-06T15:36:14.5543822Z cuda train hf_T5_large pass_due_to_skip 2023-05-06T15:36:24.6071897Z cuda train lennard_jones pass 2023-05-06T15:36:28.9737857Z cuda train llama WARNING:common:fp64 golden ref were not generated for llama. Setting accuracy check to cosine 2023-05-06T15:36:29.1080564Z eager_1st_run_fail 2023-05-06T15:36:40.0019691Z cuda train maml_omniglot pass 2023-05-06T15:37:13.4646617Z cuda train mnasnet1_0 pass 2023-05-06T15:37:48.4854483Z cuda train mobilenet_v2 pass 2023-05-06T15:37:54.8953189Z Eager model failed to run 2023-05-06T15:37:54.8953604Z Traceback (most recent call last): 2023-05-06T15:37:54.8954089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T15:37:54.8954657Z self.model_iter_fn(model, example_inputs) 2023-05-06T15:37:54.8955472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T15:37:54.8956094Z pred = mod(*cloned_inputs) 2023-05-06T15:37:54.8957553Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T15:37:54.8958197Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T15:37:54.8958722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T15:37:54.8959056Z raise e 2023-05-06T15:37:54.8959514Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T15:37:54.8959916Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T15:37:54.8960462Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:37:54.8960836Z return self._call_impl(*args, **kwargs) 2023-05-06T15:37:54.8961336Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:37:54.8961677Z return 
forward_call(*args, **kwargs) 2023-05-06T15:37:54.8962684Z File ".3", line 207, in forward 2023-05-06T15:37:54.8963185Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T15:37:54.8963961Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:37:54.8964360Z return self._call_impl(*args, **kwargs) 2023-05-06T15:37:54.8965072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:37:54.8965544Z return forward_call(*args, **kwargs) 2023-05-06T15:37:54.8966096Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T15:37:54.8966483Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T15:37:54.8966789Z RuntimeError: expected scalar type Float but found Half 2023-05-06T15:37:54.8966981Z 2023-05-06T15:37:54.8967153Z The above exception was the direct cause of the following exception: 2023-05-06T15:37:54.8967359Z 2023-05-06T15:37:54.8967480Z Traceback (most recent call last): 2023-05-06T15:37:54.8967811Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T15:37:54.8968578Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T15:37:54.8968968Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T15:37:54.8969331Z self.validate_model(model, example_inputs) 2023-05-06T15:37:54.8969680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T15:37:54.8970064Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T15:37:54.8970399Z NotImplementedError: Eager model failed to run 2023-05-06T15:37:54.8970579Z 2023-05-06T15:37:54.8970728Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T15:38:32.5767818Z cuda train mobilenet_v3_large pass 2023-05-06T15:38:41.6184256Z cuda train moco [2023-05-06 15:38:41,617] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T15:39:22.0867230Z [2023-05-06 15:39:22,084] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T15:39:22.0868401Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T15:39:22.0870024Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
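Editor's note: the recompilation warning above points at two knobs. A short sketch of both; the limit value below is illustrative, and the environment variable is normally exported before the process starts rather than set mid-run:

import torch._dynamo

# Allow more compiled variants per code object before dynamo stops recompiling.
# The warning above fired at the default of 64 in this run.
torch._dynamo.config.cache_size_limit = 128

# To see which guards fail and trigger the recompiles, launch the run with
#   TORCHDYNAMO_REPORT_GUARD_FAILURES=1 python benchmarks/dynamo/torchbench.py ...
# as the warning message suggests.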
2023-05-06T15:39:22.2784555Z [2023-05-06 15:39:22,277] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T15:39:22.2816706Z [2023-05-06 15:39:22,281] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T15:39:31.0617621Z [2023-05-06 15:39:31,060] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T15:39:31.1208199Z ERROR:common:element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T15:39:31.1208565Z Traceback (most recent call last): 2023-05-06T15:39:31.1208933Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:39:31.1209331Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:39:31.1210407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:39:31.1210747Z return fn(*args, **kwargs) 2023-05-06T15:39:31.1211166Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:39:31.1211524Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:39:31.1214796Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:39:31.1215474Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:39:31.1216276Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:39:31.1216844Z self.optimizer_zero_grad(mod) 2023-05-06T15:39:31.1217607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:39:31.1218075Z pred = mod(*cloned_inputs) 2023-05-06T15:39:31.1218466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 399, in 2023-05-06T15:39:31.1218825Z loss = self.compute_loss(pred) 2023-05-06T15:39:31.1219184Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in 2023-05-06T15:39:31.1219562Z self.grad_scaler.scale(loss).backward() 2023-05-06T15:39:31.1220192Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T15:39:31.1220520Z torch.autograd.backward( 2023-05-06T15:39:31.1221031Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T15:39:31.1221473Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:39:31.1222246Z RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T15:39:31.1222617Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:39:31.1352702Z fail_to_run 2023-05-06T15:39:47.6032063Z cuda train nvidia_deeprecommender pass 2023-05-06T15:39:52.3208666Z Eager model failed to run 2023-05-06T15:39:52.3220331Z Traceback (most recent call last): 2023-05-06T15:39:52.3223616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T15:39:52.3224257Z self.model_iter_fn(model, example_inputs) 2023-05-06T15:39:52.3224944Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T15:39:52.3226241Z self.grad_scaler.scale(loss).backward() 2023-05-06T15:39:52.3227293Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T15:39:52.3228375Z torch.autograd.backward( 2023-05-06T15:39:52.3229612Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T15:39:52.3230476Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:39:52.3231186Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 69, in __call__ 2023-05-06T15:39:52.3231543Z return self.hook(module, *args, **kwargs) 2023-05-06T15:39:52.3232122Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook 2023-05-06T15:39:52.3232570Z activations, backprops = self.rearrange_grad_samples( 2023-05-06T15:39:52.3233593Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 374, in rearrange_grad_samples 2023-05-06T15:39:52.3234035Z raise ValueError( 2023-05-06T15:39:52.3234577Z ValueError: No activations detected for , run forward after add_hooks(model) 2023-05-06T15:39:52.3234947Z 2023-05-06T15:39:52.3235132Z The above exception was the direct cause of the following exception: 2023-05-06T15:39:52.3235337Z 2023-05-06T15:39:52.3235455Z Traceback (most recent call last): 2023-05-06T15:39:52.3235780Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T15:39:52.3236145Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T15:39:52.3236535Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T15:39:52.3237197Z self.validate_model(model, example_inputs) 2023-05-06T15:39:52.3237577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T15:39:52.3237967Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T15:39:52.3238310Z NotImplementedError: Eager model failed to run 2023-05-06T15:39:52.3238477Z 2023-05-06T15:39:52.3238603Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T15:40:24.4856696Z cuda train phlippe_densenet pass 2023-05-06T15:40:41.8375365Z cuda train phlippe_resnet pass 2023-05-06T15:40:42.9568215Z accuracy pass_rate=79.17% 2023-05-06T15:40:42.9568785Z calls_captured gmean=0.00x mean=505.542x 2023-05-06T15:40:42.9569092Z unique_graphs gmean=0.00x mean=9.625x 2023-05-06T15:40:42.9570110Z graph_breaks gmean=0.00x mean=15.333x 2023-05-06T15:40:42.9572010Z unique_graph_breaks gmean=0.00x mean=5.875x 2023-05-06T15:40:43.4993535Z + python benchmarks/dynamo/torchbench.py --accuracy --training --amp --backend inductor --dynamic-shapes --dynamic-batch-only --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_dynamic_torchbench_amp_training_cuda_accuracy.csv 2023-05-06T15:40:56.8985474Z cuda train functorch_maml_omniglot pass 2023-05-06T15:41:50.0573989Z cuda train hf_Albert [2023-05-06 15:41:50,055] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15360000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:41:58.8986289Z pass 2023-05-06T15:43:13.5628296Z cuda train hf_Bart pass 2023-05-06T15:44:10.1863438Z cuda train hf_Bert [2023-05-06 15:44:10,183] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15627264*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:44:14.1175131Z pass 2023-05-06T15:45:57.5747456Z cuda train hf_Bert_large [2023-05-06 15:45:57,571] torch.fx.experimental.symbolic_shapes: [WARNING] 
Ignored guard 15627264*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:46:05.0834907Z pass 2023-05-06T15:46:26.6086900Z cuda train hf_BigBird ERROR:common:backend='inductor' raised: 2023-05-06T15:46:26.6087604Z AssertionError: -1/2 2023-05-06T15:46:26.6087834Z 2023-05-06T15:46:26.6087843Z 2023-05-06T15:46:26.6088127Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:46:26.6088566Z import torch._dynamo 2023-05-06T15:46:26.6088990Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:46:26.6089428Z Traceback (most recent call last): 2023-05-06T15:46:26.6089936Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:46:26.6090610Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:46:26.6091440Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:46:26.6091956Z return fn(*args, **kwargs) 2023-05-06T15:46:26.6092447Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:46:26.6093023Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:46:26.6093625Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:46:26.6094156Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:46:26.6094816Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:46:26.6095369Z self.optimizer_zero_grad(mod) 2023-05-06T15:46:26.6095935Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:46:26.6096459Z pred = mod(*cloned_inputs) 2023-05-06T15:46:26.6097285Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6097849Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6098618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6099143Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6099987Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2455, in forward 2023-05-06T15:46:26.6100610Z outputs = self.bert( 2023-05-06T15:46:26.6101347Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6101909Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6102667Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6103200Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6104011Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2103, in forward 2023-05-06T15:46:26.6104536Z to_mask = None 2023-05-06T15:46:26.6105819Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6106348Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6107108Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6107634Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6108441Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1632, in forward 2023-05-06T15:46:26.6108983Z layer_outputs = layer_module( 2023-05-06T15:46:26.6109750Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6110729Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6111544Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6112097Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6112925Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1484, in forward 2023-05-06T15:46:26.6113525Z self_attention_outputs = self.attention( 2023-05-06T15:46:26.6114317Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6114877Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6115636Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6116153Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6117391Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1397, in forward 2023-05-06T15:46:26.6117942Z self_outputs = self.self( 2023-05-06T15:46:26.6118731Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:46:26.6119280Z return self._call_impl(*args, **kwargs) 2023-05-06T15:46:26.6120046Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:46:26.6120634Z return forward_call(*args, **kwargs) 2023-05-06T15:46:26.6121441Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 470, in forward 2023-05-06T15:46:26.6122090Z context_layer, attention_probs = self.bigbird_block_sparse_attention( 2023-05-06T15:46:26.6122922Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T15:46:26.6123526Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T15:46:26.6124317Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T15:46:26.6124924Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T15:46:26.6125722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T15:46:26.6126202Z return fn(*args, **kwargs) 2023-05-06T15:46:26.6126993Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T15:46:26.6127507Z return _compile( 2023-05-06T15:46:26.6128214Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6128691Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6129414Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T15:46:26.6129992Z out_code = transform_code_object(code, transform) 2023-05-06T15:46:26.6130941Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T15:46:26.6131904Z transformations(instructions, code_options) 2023-05-06T15:46:26.6132704Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T15:46:26.6133195Z tracer.run() 2023-05-06T15:46:26.6133895Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T15:46:26.6134371Z super().run() 2023-05-06T15:46:26.6135083Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T15:46:26.6135566Z and self.step() 2023-05-06T15:46:26.6136524Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T15:46:26.6137077Z getattr(self, inst.opname)(inst) 2023-05-06T15:46:26.6137856Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T15:46:26.6138433Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T15:46:26.6139282Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T15:46:26.6139898Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T15:46:26.6140440Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:46:26.6140978Z return func(*args, **kwds) 2023-05-06T15:46:26.6141762Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T15:46:26.6142344Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T15:46:26.6143096Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6143578Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6144346Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T15:46:26.6145091Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T15:46:26.6145970Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T15:46:26.6146597Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T15:46:26.6147464Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T15:46:26.6148082Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T15:46:26.6148877Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T15:46:26.6149442Z return compile_fx(*args, **kwargs) 2023-05-06T15:46:26.6150220Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T15:46:26.6150819Z return aot_autograd( 2023-05-06T15:46:26.6151581Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T15:46:26.6152186Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T15:46:26.6153049Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 
2023-05-06T15:46:26.6153635Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T15:46:26.6154418Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6155107Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6155959Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T15:46:26.6156830Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:46:26.6158074Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T15:46:26.6158753Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:46:26.6159679Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T15:46:26.6160367Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:46:26.6161341Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T15:46:26.6161950Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T15:46:26.6162978Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6163486Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6164276Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T15:46:26.6164792Z return inner_compile( 2023-05-06T15:46:26.6165543Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T15:46:26.6166133Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T15:46:26.6166927Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T15:46:26.6167423Z return fn(*args, **kwargs) 2023-05-06T15:46:26.6167884Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:46:26.6168339Z return func(*args, **kwds) 2023-05-06T15:46:26.6169112Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T15:46:26.6169667Z compiled_fn = graph.compile_to_fn() 2023-05-06T15:46:26.6170455Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T15:46:26.6171071Z return self.compile_to_module().call 2023-05-06T15:46:26.6171829Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6172364Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6173117Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T15:46:26.6173646Z code, linemap = self.codegen() 2023-05-06T15:46:26.6174383Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 647, in codegen 2023-05-06T15:46:26.6174920Z return self.wrapper_code.generate() 2023-05-06T15:46:26.6175684Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:46:26.6176194Z r = func(*args, **kwargs) 2023-05-06T15:46:26.6176973Z 
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 419, in generate 2023-05-06T15:46:26.6177541Z output_refs = self.get_output_refs() 2023-05-06T15:46:26.6178262Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 274, in wrapper 2023-05-06T15:46:26.6178788Z setattr(self, key, fn(self)) 2023-05-06T15:46:26.6179568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in get_output_refs 2023-05-06T15:46:26.6180180Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T15:46:26.6181094Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in 2023-05-06T15:46:26.6181712Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T15:46:26.6182538Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 2142, in codegen_reference 2023-05-06T15:46:26.6183378Z expr = pexpr(V.graph.sizevars.simplify(self.shape)) 2023-05-06T15:46:26.6184202Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint 2023-05-06T15:46:26.6184735Z return self._str(self._print(expr)) 2023-05-06T15:46:26.6185461Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print 2023-05-06T15:46:26.6185998Z return printmethod(expr, **kwargs) 2023-05-06T15:46:26.6186806Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 191, in _print_Pow 2023-05-06T15:46:26.6187339Z assert exp == int(exp), exp 2023-05-06T15:46:26.6188177Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T15:46:26.6188740Z AssertionError: -1/2 2023-05-06T15:46:26.6188969Z 2023-05-06T15:46:26.6188977Z 2023-05-06T15:46:26.6189225Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:46:26.6189666Z import torch._dynamo 2023-05-06T15:46:26.6190084Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:46:26.6190368Z 2023-05-06T15:46:26.6190712Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:46:26.6292963Z fail_to_run 2023-05-06T15:47:01.0018525Z cuda train hf_DistilBert [2023-05-06 15:47:00,999] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15627264*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:47:03.6087640Z pass 2023-05-06T15:47:58.3107235Z cuda train hf_GPT2 [2023-05-06 15:47:58,308] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1179648*s0 - 1536 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:48:02.6035386Z pass 2023-05-06T15:48:24.4454934Z cuda train hf_GPT2_large pass_due_to_skip 2023-05-06T15:48:44.0742769Z cuda train hf_Longformer [2023-05-06 15:48:44,062] torch._dynamo.variables.torch: [WARNING] Calling on only torch.SymInt arguments is not yet supported. 2023-05-06T15:48:44.0743804Z To support this behavior, we need to allow const-propping tensors that store symint data. 2023-05-06T15:48:44.0744254Z For now, dynamo will explicitly graph break when it encounters user code with this behavior. 2023-05-06T15:48:44.0744492Z 2023-05-06T15:48:48.9318455Z ERROR:common:backend='inductor' raised: 2023-05-06T15:48:48.9318985Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 
2023-05-06T15:48:48.9319247Z 2023-05-06T15:48:48.9319257Z 2023-05-06T15:48:48.9319624Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:48:48.9319976Z import torch._dynamo 2023-05-06T15:48:48.9327074Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:48:48.9327837Z Traceback (most recent call last): 2023-05-06T15:48:48.9328424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T15:48:48.9328926Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T15:48:48.9329792Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:48:48.9330243Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9330793Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T15:48:48.9331319Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T15:48:48.9331943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T15:48:48.9332660Z cloned_inputs = clone_inputs(inputs) 2023-05-06T15:48:48.9333352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T15:48:48.9334475Z self.optimizer_zero_grad(mod) 2023-05-06T15:48:48.9335194Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T15:48:48.9335818Z pred = mod(*cloned_inputs) 2023-05-06T15:48:48.9336900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:48:48.9337284Z return self._call_impl(*args, **kwargs) 2023-05-06T15:48:48.9337801Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:48:48.9338161Z return forward_call(*args, **kwargs) 2023-05-06T15:48:48.9339018Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T15:48:48.9339414Z outputs = self.longformer( 2023-05-06T15:48:48.9339934Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:48:48.9340320Z return self._call_impl(*args, **kwargs) 2023-05-06T15:48:48.9340809Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:48:48.9341302Z return forward_call(*args, **kwargs) 2023-05-06T15:48:48.9342306Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T15:48:48.9343005Z encoder_outputs = self.encoder( 2023-05-06T15:48:48.9343789Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:48:48.9344167Z return self._call_impl(*args, **kwargs) 2023-05-06T15:48:48.9345126Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:48:48.9345766Z return forward_call(*args, **kwargs) 2023-05-06T15:48:48.9346777Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T15:48:48.9347245Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T15:48:48.9348388Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1326, in 2023-05-06T15:48:48.9349086Z layer_outputs = layer_module( 2023-05-06T15:48:48.9350033Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:48:48.9350432Z return self._call_impl(*args, **kwargs) 2023-05-06T15:48:48.9350933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:48:48.9351284Z return forward_call(*args, **kwargs) 2023-05-06T15:48:48.9351784Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T15:48:48.9352175Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T15:48:48.9352692Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T15:48:48.9353090Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T15:48:48.9353679Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T15:48:48.9354013Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9354512Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T15:48:48.9355028Z return _compile( 2023-05-06T15:48:48.9355507Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:48:48.9355836Z r = func(*args, **kwargs) 2023-05-06T15:48:48.9356534Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T15:48:48.9357216Z out_code = transform_code_object(code, transform) 2023-05-06T15:48:48.9357799Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T15:48:48.9358203Z transformations(instructions, code_options) 2023-05-06T15:48:48.9358718Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T15:48:48.9359045Z tracer.run() 2023-05-06T15:48:48.9359510Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T15:48:48.9359984Z super().run() 2023-05-06T15:48:48.9360463Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T15:48:48.9360791Z and self.step() 2023-05-06T15:48:48.9361247Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T15:48:48.9361594Z getattr(self, inst.opname)(inst) 2023-05-06T15:48:48.9362112Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T15:48:48.9362462Z self.output.compile_subgraph( 2023-05-06T15:48:48.9362980Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T15:48:48.9363396Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T15:48:48.9363764Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T15:48:48.9364055Z return func(*args, **kwds) 2023-05-06T15:48:48.9364572Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T15:48:48.9365019Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T15:48:48.9365503Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:48:48.9365830Z r = func(*args, **kwargs) 2023-05-06T15:48:48.9366330Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T15:48:48.9366760Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T15:48:48.9367313Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T15:48:48.9367717Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T15:48:48.9368267Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T15:48:48.9368635Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T15:48:48.9369159Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T15:48:48.9369514Z return compile_fx(*args, **kwargs) 2023-05-06T15:48:48.9370009Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T15:48:48.9370335Z return aot_autograd( 2023-05-06T15:48:48.9370822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T15:48:48.9371224Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T15:48:48.9371769Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T15:48:48.9372166Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T15:48:48.9372671Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T15:48:48.9373171Z r = func(*args, **kwargs) 2023-05-06T15:48:48.9373700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T15:48:48.9374158Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:48:48.9374747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T15:48:48.9375227Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:48:48.9375815Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T15:48:48.9376355Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T15:48:48.9376951Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T15:48:48.9377319Z fx_g = create_functionalized_graph( 2023-05-06T15:48:48.9377869Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T15:48:48.9378330Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T15:48:48.9378904Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T15:48:48.9379363Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T15:48:48.9379935Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:48:48.9380271Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9380753Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:48:48.9381079Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9381596Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T15:48:48.9381986Z graph = tracer.trace(root, concrete_args) 2023-05-06T15:48:48.9382466Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T15:48:48.9382795Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9383273Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T15:48:48.9383606Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9384060Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T15:48:48.9384401Z (self.create_arg(fn(*args)),), 2023-05-06T15:48:48.9384924Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T15:48:48.9385261Z tree_out = root_fn(*tree_args) 2023-05-06T15:48:48.9385764Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T15:48:48.9386101Z out = f(*tensors) 2023-05-06T15:48:48.9386576Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T15:48:48.9386972Z return functionalized_f_helper(primals, tangents) 2023-05-06T15:48:48.9387536Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 2023-05-06T15:48:48.9387894Z f_outs = fn(*f_args) 2023-05-06T15:48:48.9388368Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T15:48:48.9388734Z backward_out = torch.autograd.grad( 2023-05-06T15:48:48.9389227Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T15:48:48.9389658Z return handle_torch_function( 2023-05-06T15:48:48.9390160Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T15:48:48.9390560Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T15:48:48.9391113Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T15:48:48.9391472Z return replace_fn(func)(*args, **kwargs) 2023-05-06T15:48:48.9391959Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T15:48:48.9392501Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T15:48:48.9393070Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 
2023-05-06T15:48:48.9393391Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9393913Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T15:48:48.9394332Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T15:48:48.9394938Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T15:48:48.9395361Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T15:48:48.9395914Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T15:48:48.9396262Z out = func(*args, **kwargs) 2023-05-06T15:48:48.9397025Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T15:48:48.9397379Z return self._op(*args, **kwargs or {}) 2023-05-06T15:48:48.9397875Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T15:48:48.9398190Z return fn(*args, **kwargs) 2023-05-06T15:48:48.9398699Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T15:48:48.9399088Z return self.dispatch(func, types, args, kwargs) 2023-05-06T15:48:48.9399616Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1269, in dispatch 2023-05-06T15:48:48.9399989Z return decomposition_table[func](*args, **kwargs) 2023-05-06T15:48:48.9400478Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T15:48:48.9400851Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T15:48:48.9401380Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T15:48:48.9401727Z raise ValueError(msg) 2023-05-06T15:48:48.9402139Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T15:48:48.9402585Z ValueError: Cannot view a tensor with shape torch.Size([4, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (48, 4, 256, 513)! 
2023-05-06T15:48:48.9402828Z 2023-05-06T15:48:48.9402834Z 2023-05-06T15:48:48.9402998Z You can suppress this exception and fall back to eager by setting: 2023-05-06T15:48:48.9403286Z import torch._dynamo 2023-05-06T15:48:48.9403561Z torch._dynamo.config.suppress_errors = True 2023-05-06T15:48:48.9403740Z 2023-05-06T15:48:48.9403897Z TorchDynamo optimized model failed to run because of following error 2023-05-06T15:48:48.9461955Z fail_to_run 2023-05-06T15:49:22.4388472Z cuda train hf_Reformer [2023-05-06 15:49:22,436] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1048576*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.4431510Z [2023-05-06 15:49:22,442] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 2097152*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.7037040Z [2023-05-06 15:49:22,702] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 786432*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.7045931Z [2023-05-06 15:49:22,704] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1048576*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.7087060Z [2023-05-06 15:49:22,708] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 64*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.7263547Z [2023-05-06 15:49:22,725] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 262144*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:22.7420779Z [2023-05-06 15:49:22,741] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 4096*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:49:23.8705846Z pass 2023-05-06T15:50:37.0899930Z cuda train hf_T5 pass 2023-05-06T15:52:54.1161725Z cuda train hf_T5_base pass 2023-05-06T15:53:13.8919063Z cuda train hf_T5_large pass_due_to_skip 2023-05-06T15:53:24.4160271Z cuda train lennard_jones pass 2023-05-06T15:53:28.8331288Z cuda train llama WARNING:common:fp64 golden ref were not generated for llama. 
Setting accuracy check to cosine 2023-05-06T15:53:28.9692987Z eager_1st_run_fail 2023-05-06T15:53:40.1639207Z cuda train maml_omniglot [2023-05-06 15:53:40,162] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 5*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:53:41.5706211Z pass 2023-05-06T15:54:15.1994533Z cuda train mnasnet1_0 [2023-05-06 15:54:15,196] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:54:20.1407140Z pass 2023-05-06T15:54:58.5415939Z cuda train mobilenet_v2 [2023-05-06 15:54:58,539] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:55:03.1047035Z pass 2023-05-06T15:55:09.7117305Z Eager model failed to run 2023-05-06T15:55:09.7117753Z Traceback (most recent call last): 2023-05-06T15:55:09.7118353Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T15:55:09.7118847Z self.model_iter_fn(model, example_inputs) 2023-05-06T15:55:09.7120172Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T15:55:09.7120631Z pred = mod(*cloned_inputs) 2023-05-06T15:55:09.7122435Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T15:55:09.7123203Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T15:55:09.7124129Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T15:55:09.7124457Z raise e 2023-05-06T15:55:09.7124916Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T15:55:09.7125325Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T15:55:09.7125880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:55:09.7126323Z return self._call_impl(*args, **kwargs) 2023-05-06T15:55:09.7126863Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:55:09.7127449Z return forward_call(*args, **kwargs) 2023-05-06T15:55:09.7129026Z File ".3", line 207, in forward 2023-05-06T15:55:09.7129640Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T15:55:09.7130600Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T15:55:09.7131006Z return self._call_impl(*args, **kwargs) 2023-05-06T15:55:09.7131496Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T15:55:09.7131852Z return forward_call(*args, **kwargs) 2023-05-06T15:55:09.7132382Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T15:55:09.7132973Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T15:55:09.7133297Z RuntimeError: expected scalar type Float but found Half 2023-05-06T15:55:09.7133488Z 2023-05-06T15:55:09.7133672Z The above exception was the direct cause of the following exception: 2023-05-06T15:55:09.7133879Z 2023-05-06T15:55:09.7133991Z Traceback (most recent call last): 2023-05-06T15:55:09.7134306Z File 
"/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T15:55:09.7134674Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T15:55:09.7135059Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T15:55:09.7135396Z self.validate_model(model, example_inputs) 2023-05-06T15:55:09.7135753Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T15:55:09.7136203Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T15:55:09.7136544Z NotImplementedError: Eager model failed to run 2023-05-06T15:55:09.7136709Z 2023-05-06T15:55:09.7136854Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T15:55:51.8331280Z cuda train mobilenet_v3_large [2023-05-06 15:55:51,830] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T15:55:58.8757088Z pass 2023-05-06T15:56:08.1801570Z cuda train moco [2023-05-06 15:56:08,179] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T16:14:11.2249781Z [2023-05-06 16:14:11,222] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T16:14:11.2250514Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T16:14:11.2251497Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 2023-05-06T16:14:11.5273620Z [2023-05-06 16:14:11,526] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T16:14:25.0381362Z ERROR:common:backend='compile_fn' raised: 2023-05-06T16:14:25.0381813Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T16:14:25.0382020Z 2023-05-06T16:14:25.0382273Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T16:14:25.0382630Z Original traceback: 2023-05-06T16:14:25.0390051Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T16:14:25.0390730Z self.queue_ptr[0] = ptr 2023-05-06T16:14:25.0390980Z 2023-05-06T16:14:25.0390991Z 2023-05-06T16:14:25.0391000Z 2023-05-06T16:14:25.0391253Z You can suppress this exception and fall back to eager by setting: 2023-05-06T16:14:25.0392125Z import torch._dynamo 2023-05-06T16:14:25.0392621Z torch._dynamo.config.suppress_errors = True 2023-05-06T16:14:25.0392913Z Traceback (most recent call last): 2023-05-06T16:14:25.0393612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1448, in check_accuracy 2023-05-06T16:14:25.0393995Z new_result = optimized_model_iter_fn(model_copy, example_inputs) 2023-05-06T16:14:25.0394703Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T16:14:25.0395040Z return fn(*args, **kwargs) 2023-05-06T16:14:25.0395375Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1291, in run_n_iterations 2023-05-06T16:14:25.0395742Z self.model_iter_fn(mod, inputs, collect_outputs=False) 2023-05-06T16:14:25.0396192Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T16:14:25.0396553Z cloned_inputs = clone_inputs(inputs) 2023-05-06T16:14:25.0397312Z File 
"/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T16:14:25.0397677Z self.optimizer_zero_grad(mod) 2023-05-06T16:14:25.0398063Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T16:14:25.0398409Z pred = mod(*cloned_inputs) 2023-05-06T16:14:25.0398948Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:14:25.0399310Z return self._call_impl(*args, **kwargs) 2023-05-06T16:14:25.0399822Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:14:25.0400175Z return forward_call(*args, **kwargs) 2023-05-06T16:14:25.0400667Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T16:14:25.0401047Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T16:14:25.0401593Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T16:14:25.0402004Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T16:14:25.0402540Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:14:25.0402909Z return self._call_impl(*args, **kwargs) 2023-05-06T16:14:25.0403407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:14:25.0403743Z return forward_call(*args, **kwargs) 2023-05-06T16:14:25.0404114Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T16:14:25.0404508Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T16:14:25.0404934Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T16:14:25.0405315Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T16:14:25.0405722Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 139, in 2023-05-06T16:14:25.0406152Z k = self._batch_unshuffle_ddp(k, idx_unshuffle) 2023-05-06T16:14:25.0406536Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 158, in 2023-05-06T16:14:25.0406891Z self._dequeue_and_enqueue(k) 2023-05-06T16:14:25.0407397Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T16:14:25.0407745Z return func(*args, **kwargs) 2023-05-06T16:14:25.0408103Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 55, in _dequeue_and_enqueue 2023-05-06T16:14:25.0408470Z keys = concat_all_gather(keys) 2023-05-06T16:14:25.0408862Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 59, in 2023-05-06T16:14:25.0409360Z ptr = int(self.queue_ptr) 2023-05-06T16:14:25.0409857Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T16:14:25.0410261Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T16:14:25.0410806Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T16:14:25.0411255Z result = inner_convert(frame, cache_size, hooks, frame_state) 
2023-05-06T16:14:25.0411762Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T16:14:25.0412098Z return fn(*args, **kwargs) 2023-05-06T16:14:25.0412710Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T16:14:25.0413042Z return _compile( 2023-05-06T16:14:25.0413512Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:14:25.0413846Z r = func(*args, **kwargs) 2023-05-06T16:14:25.0414326Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T16:14:25.0414698Z out_code = transform_code_object(code, transform) 2023-05-06T16:14:25.0415283Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T16:14:25.0415693Z transformations(instructions, code_options) 2023-05-06T16:14:25.0416240Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T16:14:25.0416570Z tracer.run() 2023-05-06T16:14:25.0417048Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T16:14:25.0417375Z super().run() 2023-05-06T16:14:25.0417832Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T16:14:25.0418154Z and self.step() 2023-05-06T16:14:25.0418625Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T16:14:25.0418960Z getattr(self, inst.opname)(inst) 2023-05-06T16:14:25.0419476Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T16:14:25.0419840Z self.output.compile_subgraph( 2023-05-06T16:14:25.0420355Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T16:14:25.0420762Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T16:14:25.0421139Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T16:14:25.0421435Z return func(*args, **kwds) 2023-05-06T16:14:25.0421958Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T16:14:25.0422351Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T16:14:25.0422850Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:14:25.0423175Z r = func(*args, **kwargs) 2023-05-06T16:14:25.0423663Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T16:14:25.0424088Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T16:14:25.0424649Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T16:14:25.0425040Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T16:14:25.0425587Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T16:14:25.0426136Z return self.backend_compile_fn(gm, 
example_inputs) 2023-05-06T16:14:25.0426681Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T16:14:25.0427049Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T16:14:25.0427568Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T16:14:25.0427920Z return compile_fx(*args, **kwargs) 2023-05-06T16:14:25.0428401Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T16:14:25.0428739Z return aot_autograd( 2023-05-06T16:14:25.0429361Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T16:14:25.0429759Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T16:14:25.0430317Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T16:14:25.0430711Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T16:14:25.0431216Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:14:25.0431529Z r = func(*args, **kwargs) 2023-05-06T16:14:25.0432058Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2959, in create_aot_dispatcher_function 2023-05-06T16:14:25.0432478Z fw_metadata = run_functionalized_fw_and_collect_metadata( 2023-05-06T16:14:25.0433003Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 719, in inner 2023-05-06T16:14:25.0433335Z flat_f_outs = f(*flat_f_args) 2023-05-06T16:14:25.0433845Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3259, in functional_call 2023-05-06T16:14:25.0434243Z out = Interpreter(mod).run(*args[params_len:], **kwargs) 2023-05-06T16:14:25.0434728Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T16:14:25.0435070Z self.env[node] = self.run_node(node) 2023-05-06T16:14:25.0435552Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T16:14:25.0435921Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T16:14:25.0436460Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 267, in call_function 2023-05-06T16:14:25.0436948Z return target(*args, **kwargs) 2023-05-06T16:14:25.0437468Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T16:14:25.0437826Z return replace_fn(func)(*args, **kwargs) 2023-05-06T16:14:25.0438317Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T16:14:25.0438780Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T16:14:25.0438978Z 2023-05-06T16:14:25.0439223Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T16:14:25.0439563Z Original traceback: 2023-05-06T16:14:25.0457716Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T16:14:25.0458117Z self.queue_ptr[0] = ptr 2023-05-06T16:14:25.0458268Z 2023-05-06T16:14:25.0458275Z 2023-05-06T16:14:25.0458280Z 
2023-05-06T16:14:25.0458447Z You can suppress this exception and fall back to eager by setting: 2023-05-06T16:14:25.0458746Z import torch._dynamo 2023-05-06T16:14:25.0459028Z torch._dynamo.config.suppress_errors = True 2023-05-06T16:14:25.0459208Z 2023-05-06T16:14:25.0459379Z TorchDynamo optimized model failed to run because of following error 2023-05-06T16:14:25.0543363Z fail_to_run 2023-05-06T16:14:44.1543002Z cuda train nvidia_deeprecommender pass 2023-05-06T16:14:48.8430955Z Eager model failed to run 2023-05-06T16:14:48.8444527Z Traceback (most recent call last): 2023-05-06T16:14:48.8447305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T16:14:48.8447708Z self.model_iter_fn(model, example_inputs) 2023-05-06T16:14:48.8448099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T16:14:48.8448461Z self.grad_scaler.scale(loss).backward() 2023-05-06T16:14:48.8449867Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T16:14:48.8456181Z torch.autograd.backward( 2023-05-06T16:14:48.8457070Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T16:14:48.8457719Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T16:14:48.8458326Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 69, in __call__ 2023-05-06T16:14:48.8458695Z return self.hook(module, *args, **kwargs) 2023-05-06T16:14:48.8459258Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 326, in capture_backprops_hook 2023-05-06T16:14:48.8459694Z activations, backprops = self.rearrange_grad_samples( 2023-05-06T16:14:48.8460286Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 374, in rearrange_grad_samples 2023-05-06T16:14:48.8460638Z raise ValueError( 2023-05-06T16:14:48.8461151Z ValueError: No activations detected for , run forward after add_hooks(model) 2023-05-06T16:14:48.8461423Z 2023-05-06T16:14:48.8461592Z The above exception was the direct cause of the following exception: 2023-05-06T16:14:48.8461804Z 2023-05-06T16:14:48.8461917Z Traceback (most recent call last): 2023-05-06T16:14:48.8462235Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T16:14:48.8462601Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T16:14:48.8462984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T16:14:48.8463321Z self.validate_model(model, example_inputs) 2023-05-06T16:14:48.8463680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T16:14:48.8464061Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T16:14:48.8464396Z NotImplementedError: Eager model failed to run 2023-05-06T16:14:48.8464563Z 2023-05-06T16:14:48.8464689Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T16:15:25.2668680Z cuda train phlippe_densenet [2023-05-06 16:15:25,264] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 10*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T16:15:30.6843502Z pass 2023-05-06T16:15:49.0341449Z cuda train phlippe_resnet [2023-05-06 16:15:49,032] 
torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 10*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T16:15:51.7288035Z pass 2023-05-06T16:15:52.8361636Z accuracy pass_rate=75.00% 2023-05-06T16:15:52.8363585Z calls_captured gmean=0.00x mean=399.708x 2023-05-06T16:15:52.8366459Z unique_graphs gmean=0.00x mean=7.083x 2023-05-06T16:15:52.8368593Z graph_breaks gmean=0.00x mean=11.958x 2023-05-06T16:15:52.8372527Z unique_graph_breaks gmean=0.00x mean=5.333x 2023-05-06T16:15:53.3374107Z + [[ training == \i\n\f\e\r\e\n\c\e ]] 2023-05-06T16:15:53.3374638Z + [[ inductor_torchbench_perf == *max_autotune* ]] 2023-05-06T16:15:53.3378476Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_no_cudagraphs_torchbench_amp_training_cuda_performance.csv 2023-05-06T16:16:13.2294818Z cuda train functorch_maml_omniglot 0.929x 2023-05-06T16:17:03.7861906Z cuda train hf_Albert 2.282x 2023-05-06T16:17:35.7111930Z cuda train hf_Bart [2023-05-06 16:17:35,708] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:18:16.4531909Z 1.431x 2023-05-06T16:18:42.5432630Z cuda train hf_Bert [2023-05-06 16:18:42,541] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:19:10.7682900Z 1.387x 2023-05-06T16:19:56.2946020Z cuda train hf_Bert_large [2023-05-06 16:19:56,291] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:20:46.7553203Z 1.419x 2023-05-06T16:21:00.5775324Z cuda train hf_BigBird [2023-05-06 16:21:00,576] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:20.9059680Z [2023-05-06 16:21:20,904] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:22.0558461Z [2023-05-06 16:21:22,054] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:29.3325776Z [2023-05-06 16:21:29,331] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:29.9011390Z [2023-05-06 16:21:29,900] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:36.5540469Z [2023-05-06 16:21:36,552] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:37.1228111Z [2023-05-06 16:21:37,122] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:43.7906353Z [2023-05-06 16:21:43,789] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:44.3669643Z [2023-05-06 16:21:44,366] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:50.9969129Z [2023-05-06 16:21:50,996] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:51.5554641Z [2023-05-06 16:21:51,554] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:58.2143217Z [2023-05-06 16:21:58,213] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:21:58.7819043Z [2023-05-06 16:21:58,781] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:05.5612789Z [2023-05-06 
16:22:05,560] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:06.1339407Z [2023-05-06 16:22:06,133] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:12.9686766Z [2023-05-06 16:22:12,967] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:13.5304828Z [2023-05-06 16:22:13,529] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:20.2412056Z [2023-05-06 16:22:20,240] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:20.8163117Z [2023-05-06 16:22:20,815] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:27.6166398Z [2023-05-06 16:22:27,615] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:28.1827115Z [2023-05-06 16:22:28,182] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:34.9597323Z [2023-05-06 16:22:34,958] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:35.5305851Z [2023-05-06 16:22:35,529] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:42.2921734Z [2023-05-06 16:22:42,291] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:22:42.8597510Z [2023-05-06 16:22:42,858] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:23:43.8488234Z 1.540x 2023-05-06T16:24:01.2696011Z cuda train hf_DistilBert [2023-05-06 16:24:01,267] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:24:25.1850522Z 1.527x 2023-05-06T16:24:49.9897479Z cuda train hf_GPT2 [2023-05-06 16:24:49,987] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:25:26.5962525Z 1.777x 2023-05-06T16:26:36.5497748Z cuda train hf_GPT2_large [2023-05-06 16:26:36,547] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:28:20.8293909Z 1.746x 2023-05-06T16:28:35.3472945Z cuda train hf_Longformer [2023-05-06 16:28:35,345] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:01.1399328Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T16:29:01.1399931Z Traceback (most recent call last): 2023-05-06T16:29:01.1400437Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T16:29:01.1403889Z fn(model, example_inputs) 2023-05-06T16:29:01.1405694Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T16:29:01.1406079Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1406443Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T16:29:01.1406810Z cloned_inputs = clone_inputs(inputs) 2023-05-06T16:29:01.1407408Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T16:29:01.1407944Z self.optimizer_zero_grad(mod) 2023-05-06T16:29:01.1408398Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T16:29:01.1409071Z pred = mod(*cloned_inputs) 2023-05-06T16:29:01.1409953Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", 
line 1502, in _wrapped_call_impl 2023-05-06T16:29:01.1410688Z return self._call_impl(*args, **kwargs) 2023-05-06T16:29:01.1411350Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:29:01.1411693Z return forward_call(*args, **kwargs) 2023-05-06T16:29:01.1412248Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T16:29:01.1412638Z outputs = self.longformer( 2023-05-06T16:29:01.1413180Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:29:01.1413546Z return self._call_impl(*args, **kwargs) 2023-05-06T16:29:01.1414044Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:29:01.1414392Z return forward_call(*args, **kwargs) 2023-05-06T16:29:01.1414929Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T16:29:01.1415345Z encoder_outputs = self.encoder( 2023-05-06T16:29:01.1415866Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:29:01.1416221Z return self._call_impl(*args, **kwargs) 2023-05-06T16:29:01.1417137Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:29:01.1417495Z return forward_call(*args, **kwargs) 2023-05-06T16:29:01.1418047Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T16:29:01.1418454Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T16:29:01.1418989Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T16:29:01.1419376Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T16:29:01.1420052Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T16:29:01.1420460Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T16:29:01.1420983Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T16:29:01.1421327Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1421833Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T16:29:01.1422181Z return _compile( 2023-05-06T16:29:01.1422651Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:29:01.1423006Z r = func(*args, **kwargs) 2023-05-06T16:29:01.1423487Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T16:29:01.1423862Z out_code = transform_code_object(code, transform) 2023-05-06T16:29:01.1424441Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T16:29:01.1424840Z transformations(instructions, code_options) 2023-05-06T16:29:01.1425456Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 
2023-05-06T16:29:01.1425792Z tracer.run() 2023-05-06T16:29:01.1426248Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T16:29:01.1426575Z super().run() 2023-05-06T16:29:01.1427042Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T16:29:01.1427363Z and self.step() 2023-05-06T16:29:01.1427820Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T16:29:01.1428165Z getattr(self, inst.opname)(inst) 2023-05-06T16:29:01.1428763Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T16:29:01.1429132Z self.output.compile_subgraph( 2023-05-06T16:29:01.1429645Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T16:29:01.1430065Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T16:29:01.1430432Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T16:29:01.1430732Z return func(*args, **kwds) 2023-05-06T16:29:01.1431238Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T16:29:01.1431628Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T16:29:01.1432126Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:29:01.1432438Z r = func(*args, **kwargs) 2023-05-06T16:29:01.1432941Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T16:29:01.1433363Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T16:29:01.1434097Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T16:29:01.1434485Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T16:29:01.1435071Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T16:29:01.1435458Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T16:29:01.1435969Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T16:29:01.1436323Z return compile_fx(*args, **kwargs) 2023-05-06T16:29:01.1437319Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T16:29:01.1437680Z return aot_autograd( 2023-05-06T16:29:01.1438184Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T16:29:01.1438593Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T16:29:01.1439441Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T16:29:01.1440039Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T16:29:01.1440825Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T16:29:01.1441338Z r = func(*args, **kwargs) 2023-05-06T16:29:01.1442176Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T16:29:01.1442874Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T16:29:01.1443827Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T16:29:01.1444674Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T16:29:01.1445665Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T16:29:01.1446339Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T16:29:01.1447238Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T16:29:01.1447818Z fx_g = create_functionalized_graph( 2023-05-06T16:29:01.1448657Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T16:29:01.1449488Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T16:29:01.1450444Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T16:29:01.1451191Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T16:29:01.1452196Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T16:29:01.1453204Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1454040Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T16:29:01.1454579Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1455541Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T16:29:01.1456187Z graph = tracer.trace(root, concrete_args) 2023-05-06T16:29:01.1457153Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T16:29:01.1457718Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1458798Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T16:29:01.1459271Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1459918Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T16:29:01.1460394Z (self.create_arg(fn(*args)),), 2023-05-06T16:29:01.1461077Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T16:29:01.1461555Z tree_out = root_fn(*tree_args) 2023-05-06T16:29:01.1462254Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T16:29:01.1462722Z out = f(*tensors) 2023-05-06T16:29:01.1463586Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T16:29:01.1464137Z return functionalized_f_helper(primals, tangents) 2023-05-06T16:29:01.1465053Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 
2023-05-06T16:29:01.1465559Z f_outs = fn(*f_args) 2023-05-06T16:29:01.1466242Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T16:29:01.1466735Z backward_out = torch.autograd.grad( 2023-05-06T16:29:01.1467425Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T16:29:01.1467897Z return handle_torch_function( 2023-05-06T16:29:01.1468574Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T16:29:01.1469143Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T16:29:01.1469908Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T16:29:01.1470444Z return replace_fn(func)(*args, **kwargs) 2023-05-06T16:29:01.1471172Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T16:29:01.1471792Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T16:29:01.1472572Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T16:29:01.1473034Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1473774Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T16:29:01.1474350Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T16:29:01.1475207Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T16:29:01.1475756Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T16:29:01.1476527Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T16:29:01.1477188Z out = func(*args, **kwargs) 2023-05-06T16:29:01.1477811Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T16:29:01.1478286Z return self._op(*args, **kwargs or {}) 2023-05-06T16:29:01.1478947Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T16:29:01.1479398Z return fn(*args, **kwargs) 2023-05-06T16:29:01.1480108Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T16:29:01.1480675Z return self.dispatch(func, types, args, kwargs) 2023-05-06T16:29:01.1481405Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1314, in dispatch 2023-05-06T16:29:01.1482134Z r = func(*args, **kwargs) 2023-05-06T16:29:01.1482762Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T16:29:01.1483235Z return self._op(*args, **kwargs or {}) 2023-05-06T16:29:01.1483893Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T16:29:01.1484397Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T16:29:01.1485206Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T16:29:01.1485679Z raise ValueError(msg) 2023-05-06T16:29:01.1486245Z 
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T16:29:01.1487078Z ValueError: Cannot view a tensor with shape torch.Size([2, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (24, 4, 256, 513)! 2023-05-06T16:29:01.1487455Z 2023-05-06T16:29:01.1487463Z 2023-05-06T16:29:01.1487693Z You can suppress this exception and fall back to eager by setting: 2023-05-06T16:29:01.1488089Z import torch._dynamo 2023-05-06T16:29:01.1488452Z torch._dynamo.config.suppress_errors = True 2023-05-06T16:29:01.1488702Z 2023-05-06T16:29:02.4642157Z ERROR 2023-05-06T16:29:09.5664312Z cuda train hf_Reformer [2023-05-06 16:29:09,565] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:19.1497636Z [2023-05-06 16:29:19,148] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:21.2974587Z [2023-05-06 16:29:21,296] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:22.0326056Z [2023-05-06 16:29:22,031] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:23.4385679Z [2023-05-06 16:29:23,437] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:28.7834887Z [2023-05-06 16:29:28,782] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:29.2396871Z [2023-05-06 16:29:29,238] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:30.6407847Z [2023-05-06 16:29:30,639] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:30.9215679Z [2023-05-06 16:29:30,920] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:31.2566692Z [2023-05-06 16:29:31,255] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:32.0575623Z [2023-05-06 16:29:32,056] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:32.9919070Z [2023-05-06 16:29:32,991] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:33.2983690Z [2023-05-06 16:29:33,297] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:34.1869422Z [2023-05-06 16:29:34,186] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:34.4665701Z [2023-05-06 16:29:34,465] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:34.7765177Z [2023-05-06 16:29:34,775] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:35.5568588Z [2023-05-06 16:29:35,556] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:36.7090899Z [2023-05-06 16:29:36,708] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:37.0136168Z [2023-05-06 16:29:37,013] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:37.2517614Z [2023-05-06 16:29:37,251] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:29:49.4660122Z 1.077x 2023-05-06T16:30:18.6534923Z cuda train hf_T5 [2023-05-06 16:30:18,651] torch._inductor.utils: [WARNING] using triton random, expect difference 
from eager 2023-05-06T16:31:08.5778457Z 1.784x 2023-05-06T16:31:18.8665761Z Eager model failed to run 2023-05-06T16:31:18.8672179Z Traceback (most recent call last): 2023-05-06T16:31:18.8672707Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T16:31:18.8673310Z self.model_iter_fn(model, example_inputs) 2023-05-06T16:31:18.8673994Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T16:31:18.8679148Z self.grad_scaler.scale(loss).backward() 2023-05-06T16:31:18.8681002Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T16:31:18.8681455Z torch.autograd.backward( 2023-05-06T16:31:18.8682000Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T16:31:18.8682426Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T16:31:18.8683306Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 384.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 235.06 MiB is free. Process 942537 has 39.16 GiB memory in use. Of the allocated memory 37.56 GiB is allocated by PyTorch, and 1.09 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T16:31:18.8684016Z 2023-05-06T16:31:18.8684194Z The above exception was the direct cause of the following exception: 2023-05-06T16:31:18.8684404Z 2023-05-06T16:31:18.8684516Z Traceback (most recent call last): 2023-05-06T16:31:18.8684856Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T16:31:18.8685233Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T16:31:18.8685598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T16:31:18.8686020Z self.validate_model(model, example_inputs) 2023-05-06T16:31:18.8686427Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T16:31:18.8686805Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T16:31:18.8687137Z NotImplementedError: Eager model failed to run 2023-05-06T16:31:18.8687321Z 2023-05-06T16:31:18.8687439Z WARNING:root:hf_T5_base failed to load 2023-05-06T16:33:11.2076536Z cuda train hf_T5_large [2023-05-06 16:33:11,205] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:35:47.9830202Z 1.725x 2023-05-06T16:36:02.6841239Z cuda train lennard_jones 0.837x 2023-05-06T16:36:06.4277770Z Test train is not implemented. 
2023-05-06T16:36:06.4278533Z Traceback (most recent call last): 2023-05-06T16:36:06.4278893Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T16:36:06.4279256Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T16:36:06.4279644Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 300, in load_model 2023-05-06T16:36:06.4285239Z benchmark = benchmark_cls( 2023-05-06T16:36:06.4285747Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T16:36:06.4286110Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T16:36:06.4286520Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/llama/__init__.py", line 17, in __init__ 2023-05-06T16:36:06.4286948Z super().__init__(test=test, device=device, jit=jit, batch_size=batch_size, extra_args=extra_args) 2023-05-06T16:36:06.4288218Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 85, in __init__ 2023-05-06T16:36:06.4288583Z self.determine_batch_size(batch_size) 2023-05-06T16:36:06.4289171Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 218, in determine_batch_size 2023-05-06T16:36:06.4289853Z raise NotImplementedError(f"Test {self.test} is not implemented.") 2023-05-06T16:36:06.4290411Z NotImplementedError: Test train is not implemented. 2023-05-06T16:36:06.4290728Z 2023-05-06T16:36:06.4291074Z WARNING:root:llama failed to load 2023-05-06T16:36:24.3549771Z cuda train maml_omniglot 0.855x 2023-05-06T16:36:48.2375263Z cuda train mnasnet1_0 [2023-05-06 16:36:48,235] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:38:12.1914870Z 0.947x 2023-05-06T16:38:38.4943843Z cuda train mobilenet_v2 [2023-05-06 16:38:38,492] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:40:19.9904961Z 1.356x 2023-05-06T16:40:26.8990603Z Eager model failed to run 2023-05-06T16:40:26.8990923Z Traceback (most recent call last): 2023-05-06T16:40:26.8991309Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T16:40:26.8991672Z self.model_iter_fn(model, example_inputs) 2023-05-06T16:40:26.8993781Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T16:40:26.8994271Z pred = mod(*cloned_inputs) 2023-05-06T16:40:26.8995489Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T16:40:26.8996082Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T16:40:26.8996932Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T16:40:26.8997458Z raise e 2023-05-06T16:40:26.8998207Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T16:40:26.8998958Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T16:40:26.8999900Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:40:26.9000500Z return self._call_impl(*args, **kwargs) 2023-05-06T16:40:26.9001490Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:40:26.9002087Z return forward_call(*args, **kwargs) 2023-05-06T16:40:26.9002575Z File 
".3", line 207, in forward 2023-05-06T16:40:26.9003229Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T16:40:26.9004383Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:40:26.9004980Z return self._call_impl(*args, **kwargs) 2023-05-06T16:40:26.9005922Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:40:26.9006506Z return forward_call(*args, **kwargs) 2023-05-06T16:40:26.9007379Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T16:40:26.9008066Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T16:40:26.9008590Z RuntimeError: expected scalar type Float but found Half 2023-05-06T16:40:26.9008944Z 2023-05-06T16:40:26.9009171Z The above exception was the direct cause of the following exception: 2023-05-06T16:40:26.9009550Z 2023-05-06T16:40:26.9009767Z Traceback (most recent call last): 2023-05-06T16:40:26.9010269Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T16:40:26.9011578Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T16:40:26.9012198Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T16:40:26.9012776Z self.validate_model(model, example_inputs) 2023-05-06T16:40:26.9013430Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T16:40:26.9014082Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T16:40:26.9014615Z NotImplementedError: Eager model failed to run 2023-05-06T16:40:26.9014954Z 2023-05-06T16:40:26.9015205Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T16:40:52.0967902Z cuda train mobilenet_v3_large [2023-05-06 16:40:52,094] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:42:36.6763476Z 1.013x 2023-05-06T16:42:45.3415957Z cuda train moco [2023-05-06 16:42:45,339] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T16:43:54.0963624Z [2023-05-06 16:43:54,093] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T16:43:54.0964821Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T16:43:54.0966390Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
2023-05-06T16:43:54.6381331Z [2023-05-06 16:43:54,637] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T16:44:32.7770301Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T16:44:32.7770783Z Traceback (most recent call last): 2023-05-06T16:44:32.7771337Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T16:44:32.7771657Z fn(model, example_inputs) 2023-05-06T16:44:32.7772994Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T16:44:32.7773358Z return fn(*args, **kwargs) 2023-05-06T16:44:32.7773806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T16:44:32.7774285Z cloned_inputs = clone_inputs(inputs) 2023-05-06T16:44:32.7774674Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T16:44:32.7775052Z self.optimizer_zero_grad(mod) 2023-05-06T16:44:32.7775415Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T16:44:32.7775758Z pred = mod(*cloned_inputs) 2023-05-06T16:44:32.7776200Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 399, in 2023-05-06T16:44:32.7776559Z loss = self.compute_loss(pred) 2023-05-06T16:44:32.7776924Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in 2023-05-06T16:44:32.7777301Z self.grad_scaler.scale(loss).backward() 2023-05-06T16:44:32.7777810Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T16:44:32.7778137Z torch.autograd.backward( 2023-05-06T16:44:32.7778632Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T16:44:32.7779065Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T16:44:32.7779473Z RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T16:44:35.3216296Z ERROR 2023-05-06T16:44:46.0932304Z cuda train nvidia_deeprecommender [2023-05-06 16:44:46,091] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:44:51.1329738Z 1.016x 2023-05-06T16:44:55.7650101Z Eager model failed to run 2023-05-06T16:44:55.7658645Z Traceback (most recent call last): 2023-05-06T16:44:55.7659032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T16:44:55.7659396Z self.model_iter_fn(model, example_inputs) 2023-05-06T16:44:55.7662851Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T16:44:55.7664770Z pred = mod(*cloned_inputs) 2023-05-06T16:44:55.7665702Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:44:55.7666177Z return self._call_impl(*args, **kwargs) 2023-05-06T16:44:55.7667368Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:44:55.7667729Z return forward_call(*args, **kwargs) 2023-05-06T16:44:55.7668470Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 148, in forward 2023-05-06T16:44:55.7668879Z return self._module(*args, **kwargs) 2023-05-06T16:44:55.7669428Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:44:55.7669791Z return self._call_impl(*args, **kwargs) 2023-05-06T16:44:55.7670293Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T16:44:55.7670650Z return forward_call(*args, **kwargs) 2023-05-06T16:44:55.7671134Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 285, in forward 2023-05-06T16:44:55.7671491Z return self._forward_impl(x) 2023-05-06T16:44:55.7672000Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 268, in _forward_impl 2023-05-06T16:44:55.7672345Z x = self.conv1(x) 2023-05-06T16:44:55.7672927Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T16:44:55.7673654Z return self._call_impl(*args, **kwargs) 2023-05-06T16:44:55.7674308Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1557, in _call_impl 2023-05-06T16:44:55.7674667Z hook_result = hook(self, args, result) 2023-05-06T16:44:55.7675230Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 288, in capture_activations_hook 2023-05-06T16:44:55.7675614Z p._forward_counter += 1 2023-05-06T16:44:55.7676077Z AttributeError: 'Parameter' object has no attribute '_forward_counter' 2023-05-06T16:44:55.7676294Z 2023-05-06T16:44:55.7676458Z The above exception was the direct cause of the following exception: 2023-05-06T16:44:55.7676859Z 2023-05-06T16:44:55.7676976Z Traceback (most recent call last): 2023-05-06T16:44:55.7677312Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T16:44:55.7677690Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T16:44:55.7678061Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T16:44:55.7678412Z self.validate_model(model, example_inputs) 2023-05-06T16:44:55.7678772Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T16:44:55.7679143Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T16:44:55.7679477Z NotImplementedError: Eager model failed to run 2023-05-06T16:44:55.7679655Z 2023-05-06T16:44:55.7679777Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T16:47:28.8380711Z cuda train phlippe_densenet 0.956x 2023-05-06T16:48:05.9599478Z cuda train phlippe_resnet 0.974x 2023-05-06T16:48:07.1891941Z abs_latency gmean=0.00x mean=43.182x 2023-05-06T16:48:07.1892471Z compilation_latency mean=67.776 seconds 2023-05-06T16:48:07.1893286Z compression_ratio mean=0.834x 2023-05-06T16:48:07.1894450Z eager_peak_mem gmean=0.00x mean=4.518x 2023-05-06T16:48:07.1897607Z dynamo_peak_mem gmean=0.00x mean=4.026x 2023-05-06T16:48:07.1900753Z calls_captured gmean=0.00x mean=585.955x 2023-05-06T16:48:07.1903346Z unique_graphs gmean=0.00x mean=6.864x 2023-05-06T16:48:07.1906340Z graph_breaks gmean=0.00x mean=18.273x 2023-05-06T16:48:07.1908673Z unique_graph_breaks gmean=0.00x mean=7.045x 2023-05-06T16:48:07.7281252Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --training --amp --backend inductor --device cuda --total-partitions 3 --partition-id 1 --output 
/var/lib/jenkins/workspace/test/test-reports/inductor_with_cudagraphs_torchbench_amp_training_cuda_performance.csv 2023-05-06T16:48:26.8738838Z cuda train functorch_maml_omniglot 1.852x 2023-05-06T16:49:16.2905627Z cuda train hf_Albert 2.371x 2023-05-06T16:49:48.4015348Z cuda train hf_Bart [2023-05-06 16:49:48,399] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:50:29.3041170Z 1.760x 2023-05-06T16:50:55.7520412Z cuda train hf_Bert [2023-05-06 16:50:55,749] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:51:24.7751850Z 2.078x 2023-05-06T16:52:11.0470182Z cuda train hf_Bert_large [2023-05-06 16:52:11,044] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:02.1281587Z 1.874x 2023-05-06T16:53:16.1265369Z cuda train hf_BigBird [2023-05-06 16:53:16,124] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:36.6262726Z [2023-05-06 16:53:36,624] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:37.7720732Z [2023-05-06 16:53:37,771] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:44.8326475Z [2023-05-06 16:53:44,831] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:45.3949052Z [2023-05-06 16:53:45,394] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:52.0060609Z [2023-05-06 16:53:52,004] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:52.5714175Z [2023-05-06 16:53:52,570] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:59.2623353Z [2023-05-06 16:53:59,261] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:53:59.8500099Z [2023-05-06 16:53:59,849] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:06.4976628Z [2023-05-06 16:54:06,496] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:07.0687807Z [2023-05-06 16:54:07,068] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:13.7365897Z [2023-05-06 16:54:13,735] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:14.6146368Z [2023-05-06 16:54:14,613] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:21.2953934Z [2023-05-06 16:54:21,294] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:21.8718797Z [2023-05-06 16:54:21,871] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:28.5341433Z [2023-05-06 16:54:28,533] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:29.1045830Z [2023-05-06 16:54:29,103] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:35.7921518Z [2023-05-06 16:54:35,791] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:36.3659006Z [2023-05-06 16:54:36,365] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:43.0331429Z [2023-05-06 16:54:43,032] torch._inductor.utils: 
[WARNING] using triton random, expect difference from eager 2023-05-06T16:54:43.5983211Z [2023-05-06 16:54:43,597] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:50.2792176Z [2023-05-06 16:54:50,278] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:50.8495700Z [2023-05-06 16:54:50,848] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:57.2328705Z [2023-05-06 16:54:57,231] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:54:58.1665977Z [2023-05-06 16:54:58,165] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:56:45.0644285Z 2.549x 2023-05-06T16:57:02.5300214Z cuda train hf_DistilBert [2023-05-06 16:57:02,528] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:57:26.8951446Z 1.565x 2023-05-06T16:57:51.8078849Z cuda train hf_GPT2 [2023-05-06 16:57:51,805] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T16:58:28.7931086Z 2.028x 2023-05-06T16:59:38.6181721Z cuda train hf_GPT2_large [2023-05-06 16:59:38,616] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:01:24.3894168Z 1.774x 2023-05-06T17:01:38.9511602Z cuda train hf_Longformer [2023-05-06 17:01:38,949] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:05.2147072Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T17:02:05.2147668Z Traceback (most recent call last): 2023-05-06T17:02:05.2149917Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T17:02:05.2150661Z fn(model, example_inputs) 2023-05-06T17:02:05.2152949Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:02:05.2153498Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2154083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T17:02:05.2154739Z cloned_inputs = clone_inputs(inputs) 2023-05-06T17:02:05.2155524Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T17:02:05.2156119Z self.optimizer_zero_grad(mod) 2023-05-06T17:02:05.2157018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T17:02:05.2157745Z pred = mod(*cloned_inputs) 2023-05-06T17:02:05.2158743Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:02:05.2159327Z return self._call_impl(*args, **kwargs) 2023-05-06T17:02:05.2160275Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:02:05.2160916Z return forward_call(*args, **kwargs) 2023-05-06T17:02:05.2161750Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T17:02:05.2162142Z outputs = self.longformer( 2023-05-06T17:02:05.2162655Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:02:05.2163037Z return self._call_impl(*args, **kwargs) 2023-05-06T17:02:05.2163522Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:02:05.2164502Z return forward_call(*args, **kwargs) 2023-05-06T17:02:05.2165065Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T17:02:05.2165442Z encoder_outputs = self.encoder( 2023-05-06T17:02:05.2165979Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:02:05.2166346Z return self._call_impl(*args, **kwargs) 2023-05-06T17:02:05.2166841Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:02:05.2167176Z return forward_call(*args, **kwargs) 2023-05-06T17:02:05.2167891Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T17:02:05.2168324Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T17:02:05.2168852Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T17:02:05.2169242Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T17:02:05.2169777Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T17:02:05.2170176Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T17:02:05.2170677Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T17:02:05.2171011Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2171586Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T17:02:05.2171928Z return _compile( 2023-05-06T17:02:05.2172392Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:02:05.2172730Z r = func(*args, **kwargs) 2023-05-06T17:02:05.2173214Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T17:02:05.2173578Z out_code = transform_code_object(code, transform) 2023-05-06T17:02:05.2174153Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T17:02:05.2174563Z transformations(instructions, code_options) 2023-05-06T17:02:05.2175068Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T17:02:05.2175396Z tracer.run() 2023-05-06T17:02:05.2175890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T17:02:05.2176213Z super().run() 2023-05-06T17:02:05.2176667Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T17:02:05.2176999Z and self.step() 2023-05-06T17:02:05.2177467Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T17:02:05.2177805Z getattr(self, inst.opname)(inst) 2023-05-06T17:02:05.2178318Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T17:02:05.2178683Z 
self.output.compile_subgraph( 2023-05-06T17:02:05.2179197Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T17:02:05.2179605Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T17:02:05.2179978Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T17:02:05.2180282Z return func(*args, **kwds) 2023-05-06T17:02:05.2180790Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T17:02:05.2181352Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T17:02:05.2181857Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:02:05.2182189Z r = func(*args, **kwargs) 2023-05-06T17:02:05.2182747Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T17:02:05.2183291Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T17:02:05.2183870Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T17:02:05.2184408Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T17:02:05.2184965Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T17:02:05.2185354Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T17:02:05.2185877Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T17:02:05.2186229Z return compile_fx(*args, **kwargs) 2023-05-06T17:02:05.2187126Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T17:02:05.2187805Z return aot_autograd( 2023-05-06T17:02:05.2188622Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T17:02:05.2189203Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T17:02:05.2190045Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T17:02:05.2190602Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T17:02:05.2191381Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:02:05.2191867Z r = func(*args, **kwargs) 2023-05-06T17:02:05.2192613Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T17:02:05.2193231Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:02:05.2194054Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T17:02:05.2194657Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:02:05.2195503Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T17:02:05.2196090Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:02:05.2197112Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T17:02:05.2197648Z fx_g = create_functionalized_graph( 2023-05-06T17:02:05.2198439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T17:02:05.2199057Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T17:02:05.2199859Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T17:02:05.2200513Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T17:02:05.2201358Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:02:05.2201817Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2202485Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T17:02:05.2203289Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2204026Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T17:02:05.2205071Z graph = tracer.trace(root, concrete_args) 2023-05-06T17:02:05.2205795Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:02:05.2206242Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2206895Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T17:02:05.2207354Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2208248Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T17:02:05.2208712Z (self.create_arg(fn(*args)),), 2023-05-06T17:02:05.2209400Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T17:02:05.2209886Z tree_out = root_fn(*tree_args) 2023-05-06T17:02:05.2210574Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T17:02:05.2211034Z out = f(*tensors) 2023-05-06T17:02:05.2211787Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T17:02:05.2212331Z return functionalized_f_helper(primals, tangents) 2023-05-06T17:02:05.2213110Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 2023-05-06T17:02:05.2213599Z f_outs = fn(*f_args) 2023-05-06T17:02:05.2214282Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T17:02:05.2214776Z backward_out = torch.autograd.grad( 2023-05-06T17:02:05.2215464Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T17:02:05.2215923Z return handle_torch_function( 2023-05-06T17:02:05.2216624Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T17:02:05.2217170Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T17:02:05.2217936Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 
22, in __torch_function__ 2023-05-06T17:02:05.2218453Z return replace_fn(func)(*args, **kwargs) 2023-05-06T17:02:05.2219104Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T17:02:05.2219720Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T17:02:05.2220498Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T17:02:05.2220997Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2221759Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T17:02:05.2222339Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T17:02:05.2223125Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T17:02:05.2223687Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T17:02:05.2224450Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T17:02:05.2224942Z out = func(*args, **kwargs) 2023-05-06T17:02:05.2225580Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T17:02:05.2226031Z return self._op(*args, **kwargs or {}) 2023-05-06T17:02:05.2226998Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T17:02:05.2227448Z return fn(*args, **kwargs) 2023-05-06T17:02:05.2228141Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T17:02:05.2228688Z return self.dispatch(func, types, args, kwargs) 2023-05-06T17:02:05.2229419Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1314, in dispatch 2023-05-06T17:02:05.2229894Z r = func(*args, **kwargs) 2023-05-06T17:02:05.2230497Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T17:02:05.2230962Z return self._op(*args, **kwargs or {}) 2023-05-06T17:02:05.2231857Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T17:02:05.2232368Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T17:02:05.2233137Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T17:02:05.2233610Z raise ValueError(msg) 2023-05-06T17:02:05.2234200Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T17:02:05.2234821Z ValueError: Cannot view a tensor with shape torch.Size([2, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (24, 4, 256, 513)! 
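Editor's note: the hf_Longformer failure above (in both the no-cudagraphs and cudagraphs runs) is inductor's functionalized trace refusing to view a non-contiguous tensor. A minimal sketch that reproduces the same refusal in plain eager mode, using the exact shape and strides from the error message; whether Longformer takes precisely this path is not verified here, and reshape() is shown only as the generic copy-based fallback, not as the fix applied by the benchmark:

    import torch

    base = torch.empty(2 * 12 * 1024 * 513)
    # Rebuild the layout from the error message:
    # shape (2, 12, 1024, 513), strides (6303744, 513, 6156, 1).
    t = base.as_strided((2, 12, 1024, 513), (6303744, 513, 6156, 1))

    try:
        t.view(24, 4, 256, 513)       # eager raises too: this layout cannot be reinterpreted in place
    except RuntimeError as err:
        print("view failed:", err)

    out = t.reshape(24, 4, 256, 513)  # reshape succeeds by materializing a contiguous copy
    print(out.shape)

None of the four dimensions of the strided tensor can be merged or split without moving memory, so view() must reject the request; the compiled path surfaces that as the BackendCompilerFailed seen above.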
2023-05-06T17:02:05.2235186Z 2023-05-06T17:02:05.2235195Z 2023-05-06T17:02:05.2235423Z You can suppress this exception and fall back to eager by setting: 2023-05-06T17:02:05.2235831Z import torch._dynamo 2023-05-06T17:02:05.2236211Z torch._dynamo.config.suppress_errors = True 2023-05-06T17:02:05.2236471Z 2023-05-06T17:02:06.5440752Z ERROR 2023-05-06T17:02:13.5343460Z cuda train hf_Reformer [2023-05-06 17:02:13,533] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:15.7430834Z [2023-05-06 17:02:15,742] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:23.2491926Z [2023-05-06 17:02:23,247] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:25.0005210Z [2023-05-06 17:02:24,999] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:25.3930439Z [2023-05-06 17:02:25,392] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:26.1433763Z [2023-05-06 17:02:26,142] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:27.5300218Z [2023-05-06 17:02:27,529] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:31.7324919Z [2023-05-06 17:02:31,731] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:32.8428120Z [2023-05-06 17:02:32,842] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:33.3214908Z [2023-05-06 17:02:33,320] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:34.4690752Z [2023-05-06 17:02:34,468] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:34.8542021Z [2023-05-06 17:02:34,853] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:34.9463214Z [2023-05-06 17:02:34,945] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:35.2865964Z [2023-05-06 17:02:35,285] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:36.0907385Z [2023-05-06 17:02:36,089] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:36.9323340Z [2023-05-06 17:02:36,931] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:37.0364091Z [2023-05-06 17:02:37,035] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:37.3407762Z [2023-05-06 17:02:37,339] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:38.2458337Z [2023-05-06 17:02:38,245] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:38.4395650Z [2023-05-06 17:02:38,438] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:38.5247127Z [2023-05-06 17:02:38,524] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:38.8197453Z [2023-05-06 17:02:38,818] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:39.6212419Z [2023-05-06 17:02:39,620] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:40.4665661Z [2023-05-06 
17:02:40,465] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:02:40.5558278Z [2023-05-06 17:02:40,555] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:41.0982293Z [2023-05-06 17:02:41,097] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:02:41.3396023Z [2023-05-06 17:02:41,338] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:03:00.4758933Z 1.151x 2023-05-06T17:03:29.7736725Z cuda train hf_T5 [2023-05-06 17:03:29,771] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:04:19.8397570Z 1.782x 2023-05-06T17:04:29.9711478Z Eager model failed to run 2023-05-06T17:04:29.9714621Z Traceback (most recent call last): 2023-05-06T17:04:29.9715018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T17:04:29.9715402Z self.model_iter_fn(model, example_inputs) 2023-05-06T17:04:29.9715783Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T17:04:29.9716259Z self.grad_scaler.scale(loss).backward() 2023-05-06T17:04:29.9717296Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T17:04:29.9717631Z torch.autograd.backward( 2023-05-06T17:04:29.9718510Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T17:04:29.9718952Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T17:04:29.9719846Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 384.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 235.06 MiB is free. Process 974478 has 39.16 GiB memory in use. Of the allocated memory 37.56 GiB is allocated by PyTorch, and 1.09 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T17:04:29.9720704Z 2023-05-06T17:04:29.9725039Z The above exception was the direct cause of the following exception: 2023-05-06T17:04:29.9725708Z 2023-05-06T17:04:29.9725913Z Traceback (most recent call last): 2023-05-06T17:04:29.9726349Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:04:29.9726737Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:04:29.9727108Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T17:04:29.9727604Z self.validate_model(model, example_inputs) 2023-05-06T17:04:29.9728534Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T17:04:29.9729291Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T17:04:29.9730194Z NotImplementedError: Eager model failed to run 2023-05-06T17:04:29.9730383Z 2023-05-06T17:04:29.9730498Z WARNING:root:hf_T5_base failed to load 2023-05-06T17:06:22.2363126Z cuda train hf_T5_large [2023-05-06 17:06:22,233] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:08:59.0548271Z 2.414x 2023-05-06T17:09:13.9171832Z cuda train lennard_jones 1.465x 2023-05-06T17:09:17.6268977Z Test train is not implemented. 
2023-05-06T17:09:17.6269320Z Traceback (most recent call last): 2023-05-06T17:09:17.6269813Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:09:17.6271184Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:09:17.6271601Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 300, in load_model 2023-05-06T17:09:17.6271912Z benchmark = benchmark_cls( 2023-05-06T17:09:17.6274931Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T17:09:17.6275693Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T17:09:17.6276189Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/llama/__init__.py", line 17, in __init__ 2023-05-06T17:09:17.6277078Z super().__init__(test=test, device=device, jit=jit, batch_size=batch_size, extra_args=extra_args) 2023-05-06T17:09:17.6277839Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 85, in __init__ 2023-05-06T17:09:17.6278200Z self.determine_batch_size(batch_size) 2023-05-06T17:09:17.6278576Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 218, in determine_batch_size 2023-05-06T17:09:17.6279018Z raise NotImplementedError(f"Test {self.test} is not implemented.") 2023-05-06T17:09:17.6279374Z NotImplementedError: Test train is not implemented. 2023-05-06T17:09:17.6279579Z 2023-05-06T17:09:17.6279689Z WARNING:root:llama failed to load 2023-05-06T17:09:34.5160409Z cuda train maml_omniglot 1.558x 2023-05-06T17:09:58.2596049Z cuda train mnasnet1_0 [2023-05-06 17:09:58,257] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:11:08.4802508Z 1.647x 2023-05-06T17:11:34.7546389Z cuda train mobilenet_v2 [2023-05-06 17:11:34,752] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:12:59.9844480Z 1.594x 2023-05-06T17:13:06.9908403Z Eager model failed to run 2023-05-06T17:13:06.9908888Z Traceback (most recent call last): 2023-05-06T17:13:06.9909687Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T17:13:06.9910609Z self.model_iter_fn(model, example_inputs) 2023-05-06T17:13:06.9911209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T17:13:06.9912690Z pred = mod(*cloned_inputs) 2023-05-06T17:13:06.9914668Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T17:13:06.9915255Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T17:13:06.9916042Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T17:13:06.9916530Z raise e 2023-05-06T17:13:06.9917476Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T17:13:06.9918124Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T17:13:06.9918976Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:13:06.9919491Z return self._call_impl(*args, **kwargs) 2023-05-06T17:13:06.9920417Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:13:06.9921990Z return forward_call(*args, **kwargs) 2023-05-06T17:13:06.9922445Z File 
".3", line 207, in forward 2023-05-06T17:13:06.9923073Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T17:13:06.9924258Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:13:06.9924935Z return self._call_impl(*args, **kwargs) 2023-05-06T17:13:06.9925825Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:13:06.9926475Z return forward_call(*args, **kwargs) 2023-05-06T17:13:06.9927751Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T17:13:06.9928455Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T17:13:06.9929018Z RuntimeError: expected scalar type Float but found Half 2023-05-06T17:13:06.9929361Z 2023-05-06T17:13:06.9929666Z The above exception was the direct cause of the following exception: 2023-05-06T17:13:06.9930038Z 2023-05-06T17:13:06.9930320Z Traceback (most recent call last): 2023-05-06T17:13:06.9930857Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:13:06.9931505Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:13:06.9932182Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T17:13:06.9932806Z self.validate_model(model, example_inputs) 2023-05-06T17:13:06.9933428Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T17:13:06.9934128Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T17:13:06.9934725Z NotImplementedError: Eager model failed to run 2023-05-06T17:13:06.9935041Z 2023-05-06T17:13:06.9935280Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T17:13:32.2018064Z cuda train mobilenet_v3_large [2023-05-06 17:13:32,200] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:15:00.4439024Z 1.956x 2023-05-06T17:15:09.1310637Z cuda train moco [2023-05-06 17:15:09,128] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T17:16:14.0059877Z [2023-05-06 17:16:14,003] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T17:16:14.0060600Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T17:16:14.0061582Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
2023-05-06T17:16:14.5513388Z [2023-05-06 17:16:14,550] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T17:16:14.5549545Z [2023-05-06 17:16:14,554] torch._inductor.utils: [WARNING] skipping cudagraphs due to multiple devices 2023-05-06T17:16:29.5980622Z [2023-05-06 17:16:29,596] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T17:16:47.1149677Z [2023-05-06 17:16:47,113] torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation 2023-05-06T17:16:49.1251492Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T17:16:49.1253760Z Traceback (most recent call last): 2023-05-06T17:16:49.1254374Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T17:16:49.1254876Z fn(model, example_inputs) 2023-05-06T17:16:49.1256420Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:16:49.1256801Z return fn(*args, **kwargs) 2023-05-06T17:16:49.1263045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T17:16:49.1263698Z cloned_inputs = clone_inputs(inputs) 2023-05-06T17:16:49.1264363Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T17:16:49.1264928Z self.optimizer_zero_grad(mod) 2023-05-06T17:16:49.1265549Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T17:16:49.1266076Z pred = mod(*cloned_inputs) 2023-05-06T17:16:49.1266686Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 399, in 2023-05-06T17:16:49.1267312Z loss = self.compute_loss(pred) 2023-05-06T17:16:49.1268416Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in 2023-05-06T17:16:49.1269129Z self.grad_scaler.scale(loss).backward() 2023-05-06T17:16:49.1270169Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T17:16:49.1270876Z torch.autograd.backward( 2023-05-06T17:16:49.1271742Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T17:16:49.1272493Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T17:16:49.1273217Z RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 2023-05-06T17:16:52.7032417Z ERROR 2023-05-06T17:17:03.4129767Z cuda train nvidia_deeprecommender [2023-05-06 17:17:03,411] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:17:09.1114289Z 0.978x 2023-05-06T17:17:13.8623739Z Eager model failed to run 2023-05-06T17:17:13.8634040Z Traceback (most recent call last): 2023-05-06T17:17:13.8634921Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T17:17:13.8639150Z self.model_iter_fn(model, example_inputs) 2023-05-06T17:17:13.8640294Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T17:17:13.8640780Z pred = mod(*cloned_inputs) 2023-05-06T17:17:13.8642192Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:17:13.8642737Z return self._call_impl(*args, **kwargs) 2023-05-06T17:17:13.8644116Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 
2023-05-06T17:17:13.8644774Z return forward_call(*args, **kwargs) 2023-05-06T17:17:13.8645536Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 148, in forward 2023-05-06T17:17:13.8646486Z return self._module(*args, **kwargs) 2023-05-06T17:17:13.8647105Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:17:13.8647479Z return self._call_impl(*args, **kwargs) 2023-05-06T17:17:13.8647982Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:17:13.8648332Z return forward_call(*args, **kwargs) 2023-05-06T17:17:13.8648835Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 285, in forward 2023-05-06T17:17:13.8649174Z return self._forward_impl(x) 2023-05-06T17:17:13.8649678Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 268, in _forward_impl 2023-05-06T17:17:13.8650017Z x = self.conv1(x) 2023-05-06T17:17:13.8650506Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:17:13.8650876Z return self._call_impl(*args, **kwargs) 2023-05-06T17:17:13.8651854Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1557, in _call_impl 2023-05-06T17:17:13.8652209Z hook_result = hook(self, args, result) 2023-05-06T17:17:13.8652786Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 288, in capture_activations_hook 2023-05-06T17:17:13.8653163Z p._forward_counter += 1 2023-05-06T17:17:13.8653573Z AttributeError: 'Parameter' object has no attribute '_forward_counter' 2023-05-06T17:17:13.8653787Z 2023-05-06T17:17:13.8653944Z The above exception was the direct cause of the following exception: 2023-05-06T17:17:13.8654149Z 2023-05-06T17:17:13.8654262Z Traceback (most recent call last): 2023-05-06T17:17:13.8654763Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:17:13.8655139Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:17:13.8655513Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T17:17:13.8655876Z self.validate_model(model, example_inputs) 2023-05-06T17:17:13.8656240Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T17:17:13.8656680Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T17:17:13.8657015Z NotImplementedError: Eager model failed to run 2023-05-06T17:17:13.8657197Z 2023-05-06T17:17:13.8657325Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T17:19:18.7064951Z cuda train phlippe_densenet 1.684x 2023-05-06T17:19:53.0546333Z cuda train phlippe_resnet 1.575x 2023-05-06T17:19:54.3103691Z abs_latency gmean=0.00x mean=34.122x 2023-05-06T17:19:54.3104066Z compilation_latency mean=67.049 seconds 2023-05-06T17:19:54.3104336Z compression_ratio mean=0.835x 2023-05-06T17:19:54.3105790Z eager_peak_mem gmean=0.00x mean=4.521x 2023-05-06T17:19:54.3108201Z dynamo_peak_mem gmean=0.00x mean=4.099x 2023-05-06T17:19:54.3110919Z calls_captured gmean=0.00x mean=585.955x 2023-05-06T17:19:54.3113341Z unique_graphs gmean=0.00x mean=6.864x 2023-05-06T17:19:54.3116282Z graph_breaks gmean=0.00x mean=18.273x 2023-05-06T17:19:54.3118996Z 
unique_graph_breaks gmean=0.00x mean=7.045x 2023-05-06T17:19:54.8502759Z + python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --training --amp --backend inductor --dynamic-shapes --dynamic-batch-only --disable-cudagraphs --device cuda --total-partitions 3 --partition-id 1 --output /var/lib/jenkins/workspace/test/test-reports/inductor_dynamic_torchbench_amp_training_cuda_performance.csv 2023-05-06T17:20:14.9621616Z cuda train functorch_maml_omniglot 0.903x 2023-05-06T17:21:13.9551959Z cuda train hf_Albert [2023-05-06 17:21:13,952] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15360000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:21:46.2172648Z 2.040x 2023-05-06T17:22:39.0298987Z cuda train hf_Bart [2023-05-06 17:22:39,027] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:23:38.2355884Z 1.398x 2023-05-06T17:24:22.1176074Z cuda train hf_Bert [2023-05-06 17:24:22,115] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:24:45.5568809Z [2023-05-06 17:24:45,554] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15627264*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:25:04.7529603Z 1.473x 2023-05-06T17:26:25.1482944Z cuda train hf_Bert_large [2023-05-06 17:26:25,144] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:27:06.2148737Z [2023-05-06 17:27:06,212] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15627264*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:27:33.1076780Z 1.433x 2023-05-06T17:27:47.7693422Z cuda train hf_BigBird [2023-05-06 17:27:47,768] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:27:53.9835713Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T17:27:53.9836034Z Traceback (most recent call last): 2023-05-06T17:27:53.9836458Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T17:27:53.9836966Z fn(model, example_inputs) 2023-05-06T17:27:53.9843220Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:27:53.9843829Z return fn(*args, **kwargs) 2023-05-06T17:27:53.9844680Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T17:27:53.9845062Z cloned_inputs = clone_inputs(inputs) 2023-05-06T17:27:53.9845443Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T17:27:53.9845837Z self.optimizer_zero_grad(mod) 2023-05-06T17:27:53.9846221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T17:27:53.9846578Z pred = mod(*cloned_inputs) 2023-05-06T17:27:53.9847214Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9847601Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9848124Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9848467Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9849079Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2455, in forward 2023-05-06T17:27:53.9849525Z outputs = self.bert( 
2023-05-06T17:27:53.9850390Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9851083Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9852036Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9852712Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9853366Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2103, in forward 2023-05-06T17:27:53.9853712Z to_mask = None 2023-05-06T17:27:53.9854203Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9854587Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9855087Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9855433Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9855966Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1632, in forward 2023-05-06T17:27:53.9856338Z layer_outputs = layer_module( 2023-05-06T17:27:53.9856828Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9857197Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9857689Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9858044Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9858572Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1484, in forward 2023-05-06T17:27:53.9858963Z self_attention_outputs = self.attention( 2023-05-06T17:27:53.9859494Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9860135Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9860722Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9861075Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9861619Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1397, in forward 2023-05-06T17:27:53.9861970Z self_outputs = self.self( 2023-05-06T17:27:53.9862496Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:27:53.9862867Z return self._call_impl(*args, **kwargs) 2023-05-06T17:27:53.9863466Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:27:53.9863823Z return forward_call(*args, **kwargs) 2023-05-06T17:27:53.9864373Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 470, in forward 2023-05-06T17:27:53.9864808Z context_layer, attention_probs = self.bigbird_block_sparse_attention( 2023-05-06T17:27:53.9865352Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T17:27:53.9865742Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T17:27:53.9866277Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T17:27:53.9866680Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T17:27:53.9867192Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T17:27:53.9867524Z return fn(*args, **kwargs) 2023-05-06T17:27:53.9868040Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T17:27:53.9868378Z return _compile( 2023-05-06T17:27:53.9868842Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9869176Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9869689Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T17:27:53.9870066Z out_code = transform_code_object(code, transform) 2023-05-06T17:27:53.9870685Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T17:27:53.9871101Z transformations(instructions, code_options) 2023-05-06T17:27:53.9871619Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T17:27:53.9871946Z tracer.run() 2023-05-06T17:27:53.9872416Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T17:27:53.9872728Z super().run() 2023-05-06T17:27:53.9873200Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T17:27:53.9873523Z and self.step() 2023-05-06T17:27:53.9873996Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T17:27:53.9874334Z getattr(self, inst.opname)(inst) 2023-05-06T17:27:53.9874832Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 431, in wrapper 2023-05-06T17:27:53.9875226Z self.output.compile_subgraph(self, reason=reason) 2023-05-06T17:27:53.9875764Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T17:27:53.9876183Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T17:27:53.9876862Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T17:27:53.9877171Z return func(*args, **kwds) 2023-05-06T17:27:53.9877698Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T17:27:53.9878089Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T17:27:53.9878585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9878903Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9879406Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T17:27:53.9879953Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T17:27:53.9880585Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T17:27:53.9880981Z compiled_fn = compiler_fn(gm, 
self.example_inputs()) 2023-05-06T17:27:53.9881523Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T17:27:53.9881914Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T17:27:53.9882424Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T17:27:53.9882784Z return compile_fx(*args, **kwargs) 2023-05-06T17:27:53.9883287Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T17:27:53.9883623Z return aot_autograd( 2023-05-06T17:27:53.9884106Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T17:27:53.9884506Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T17:27:53.9885071Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T17:27:53.9885453Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T17:27:53.9885958Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9886285Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9886831Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T17:27:53.9887278Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:27:53.9887883Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T17:27:53.9888319Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:27:53.9888919Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T17:27:53.9889350Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:27:53.9889943Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1348, in aot_dispatch_base 2023-05-06T17:27:53.9890347Z compiled_fw = compiler(fw_module, adjusted_flat_args) 2023-05-06T17:27:53.9890881Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9891214Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9891708Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 684, in fw_compiler_base 2023-05-06T17:27:53.9917541Z return inner_compile( 2023-05-06T17:27:53.9918607Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper 2023-05-06T17:27:53.9919532Z inner_compiled_fn = compiler_fn(gm, example_inputs) 2023-05-06T17:27:53.9920283Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/debug.py", line 220, in inner 2023-05-06T17:27:53.9920838Z return fn(*args, **kwargs) 2023-05-06T17:27:53.9921286Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T17:27:53.9921701Z return func(*args, **kwds) 2023-05-06T17:27:53.9922430Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 211, in compile_fx_inner 2023-05-06T17:27:53.9922945Z compiled_fn = 
graph.compile_to_fn() 2023-05-06T17:27:53.9923641Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 717, in compile_to_fn 2023-05-06T17:27:53.9924318Z return self.compile_to_module().call 2023-05-06T17:27:53.9925011Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9925463Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9926123Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 694, in compile_to_module 2023-05-06T17:27:53.9926590Z code, linemap = self.codegen() 2023-05-06T17:27:53.9927305Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 647, in codegen 2023-05-06T17:27:53.9927795Z return self.wrapper_code.generate() 2023-05-06T17:27:53.9928503Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:27:53.9928982Z r = func(*args, **kwargs) 2023-05-06T17:27:53.9929700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 419, in generate 2023-05-06T17:27:53.9930224Z output_refs = self.get_output_refs() 2023-05-06T17:27:53.9931005Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/utils.py", line 274, in wrapper 2023-05-06T17:27:53.9931501Z setattr(self, key, fn(self)) 2023-05-06T17:27:53.9932252Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in get_output_refs 2023-05-06T17:27:53.9932837Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T17:27:53.9933639Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/wrapper.py", line 283, in 2023-05-06T17:27:53.9934233Z return [x.codegen_reference() for x in V.graph.graph_outputs] 2023-05-06T17:27:53.9934989Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/ir.py", line 2142, in codegen_reference 2023-05-06T17:27:53.9935569Z expr = pexpr(V.graph.sizevars.simplify(self.shape)) 2023-05-06T17:27:53.9936357Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint 2023-05-06T17:27:53.9936870Z return self._str(self._print(expr)) 2023-05-06T17:27:53.9937586Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print 2023-05-06T17:27:53.9938099Z return printmethod(expr, **kwargs) 2023-05-06T17:27:53.9938843Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 191, in _print_Pow 2023-05-06T17:27:53.9939325Z assert exp == int(exp), exp 2023-05-06T17:27:53.9939898Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T17:27:53.9940382Z AssertionError: -1/2 2023-05-06T17:27:53.9940640Z 2023-05-06T17:27:53.9940650Z 2023-05-06T17:27:53.9940883Z You can suppress this exception and fall back to eager by setting: 2023-05-06T17:27:53.9941262Z import torch._dynamo 2023-05-06T17:27:53.9941646Z torch._dynamo.config.suppress_errors = True 2023-05-06T17:27:53.9941889Z 2023-05-06T17:27:55.1526120Z ERROR 2023-05-06T17:28:19.7312281Z cuda train hf_DistilBert [2023-05-06 17:28:19,729] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:28:38.1227892Z [2023-05-06 17:28:38,120] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 15627264*s0 < 2147483648 == True, this could 
result in accuracy problems 2023-05-06T17:28:51.0969789Z 1.476x 2023-05-06T17:29:33.5418885Z cuda train hf_GPT2 [2023-05-06 17:29:33,539] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:30:05.9741331Z [2023-05-06 17:30:05,971] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1179648*s0 - 1536 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:30:24.1105363Z 1.782x 2023-05-06T17:32:25.1915272Z cuda train hf_GPT2_large [2023-05-06 17:32:25,188] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:33:50.6161882Z [2023-05-06 17:33:50,613] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1966080*s0 - 2560 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:34:34.9215403Z 1.694x 2023-05-06T17:34:50.5385667Z cuda train hf_Longformer [2023-05-06 17:34:50,537] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:34:55.5625875Z [2023-05-06 17:34:55,561] torch._dynamo.variables.torch: [WARNING] Calling on only torch.SymInt arguments is not yet supported. 2023-05-06T17:34:55.5627026Z To support this behavior, we need to allow const-propping tensors that store symint data. 2023-05-06T17:34:55.5627597Z For now, dynamo will explicitly graph break when it encounters user code with this behavior. 2023-05-06T17:34:55.5628030Z 2023-05-06T17:35:00.6966692Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T17:35:00.6967269Z Traceback (most recent call last): 2023-05-06T17:35:00.6967894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T17:35:00.6968499Z fn(model, example_inputs) 2023-05-06T17:35:00.6969618Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:35:00.6970238Z return fn(*args, **kwargs) 2023-05-06T17:35:00.6970986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T17:35:00.6971968Z cloned_inputs = clone_inputs(inputs) 2023-05-06T17:35:00.6972627Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T17:35:00.6973249Z self.optimizer_zero_grad(mod) 2023-05-06T17:35:00.6973946Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T17:35:00.6974701Z pred = mod(*cloned_inputs) 2023-05-06T17:35:00.6975649Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:35:00.6976280Z return self._call_impl(*args, **kwargs) 2023-05-06T17:35:00.6977034Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:35:00.6977589Z return forward_call(*args, **kwargs) 2023-05-06T17:35:00.6978501Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1848, in forward 2023-05-06T17:35:00.6979084Z outputs = self.longformer( 2023-05-06T17:35:00.6980004Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:35:00.6980764Z return self._call_impl(*args, **kwargs) 2023-05-06T17:35:00.6981688Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:35:00.6982322Z return forward_call(*args, **kwargs) 
2023-05-06T17:35:00.6983354Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1750, in forward 2023-05-06T17:35:00.6984564Z encoder_outputs = self.encoder( 2023-05-06T17:35:00.6985444Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:35:00.6986086Z return self._call_impl(*args, **kwargs) 2023-05-06T17:35:00.6986983Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:35:00.6987590Z return forward_call(*args, **kwargs) 2023-05-06T17:35:00.6988590Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1294, in forward 2023-05-06T17:35:00.6989640Z is_global_attn = is_index_global_attn.flatten().any().item() 2023-05-06T17:35:00.6990870Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/transformers/models/longformer/modeling_longformer.py", line 1326, in 2023-05-06T17:35:00.6991543Z layer_outputs = layer_module( 2023-05-06T17:35:00.6992461Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:35:00.6993094Z return self._call_impl(*args, **kwargs) 2023-05-06T17:35:00.6993992Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:35:00.6994591Z return forward_call(*args, **kwargs) 2023-05-06T17:35:00.6995489Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 435, in catch_errors 2023-05-06T17:35:00.6996166Z return callback(frame, cache_size, hooks, frame_state) 2023-05-06T17:35:00.6997366Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T17:35:00.6998076Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T17:35:00.6999001Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T17:35:00.6999586Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7000584Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T17:35:00.7001117Z return _compile( 2023-05-06T17:35:00.7001847Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:35:00.7002329Z r = func(*args, **kwargs) 2023-05-06T17:35:00.7003072Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T17:35:00.7003641Z out_code = transform_code_object(code, transform) 2023-05-06T17:35:00.7004540Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T17:35:00.7005148Z transformations(instructions, code_options) 2023-05-06T17:35:00.7005933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T17:35:00.7006469Z tracer.run() 2023-05-06T17:35:00.7006954Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T17:35:00.7007278Z super().run() 2023-05-06T17:35:00.7007743Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T17:35:00.7008071Z and self.step() 2023-05-06T17:35:00.7008525Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T17:35:00.7008877Z getattr(self, inst.opname)(inst) 2023-05-06T17:35:00.7009406Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T17:35:00.7010088Z self.output.compile_subgraph( 2023-05-06T17:35:00.7010652Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T17:35:00.7011076Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T17:35:00.7011432Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T17:35:00.7011732Z return func(*args, **kwds) 2023-05-06T17:35:00.7012251Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T17:35:00.7012637Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T17:35:00.7013123Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:35:00.7013574Z r = func(*args, **kwargs) 2023-05-06T17:35:00.7014087Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T17:35:00.7014510Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T17:35:00.7015077Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T17:35:00.7015473Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T17:35:00.7016010Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T17:35:00.7016381Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T17:35:00.7016897Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T17:35:00.7017254Z return compile_fx(*args, **kwargs) 2023-05-06T17:35:00.7017746Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T17:35:00.7018081Z return aot_autograd( 2023-05-06T17:35:00.7018580Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T17:35:00.7018975Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T17:35:00.7019522Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T17:35:00.7019948Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T17:35:00.7020486Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T17:35:00.7020815Z r = func(*args, **kwargs) 2023-05-06T17:35:00.7021331Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2975, in create_aot_dispatcher_function 2023-05-06T17:35:00.7021796Z compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:35:00.7022385Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1911, in aot_wrapper_dedupe 2023-05-06T17:35:00.7022813Z return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:35:00.7023407Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2082, in aot_wrapper_synthetic_base 2023-05-06T17:35:00.7023930Z return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata) 2023-05-06T17:35:00.7024985Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2456, in aot_dispatch_autograd 2023-05-06T17:35:00.7025648Z fx_g = create_functionalized_graph( 2023-05-06T17:35:00.7026515Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1198, in create_functionalized_graph 2023-05-06T17:35:00.7027152Z fx_g = make_fx(helper, decomposition_table=aot_config.decompositions)(*args) 2023-05-06T17:35:00.7028264Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 778, in wrapped 2023-05-06T17:35:00.7028921Z t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_autograd), tracer=fx_tracer, concrete_args=tuple(phs)) 2023-05-06T17:35:00.7029721Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:35:00.7030179Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7030890Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T17:35:00.7031354Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7032067Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 474, in dispatch_trace 2023-05-06T17:35:00.7032775Z graph = tracer.trace(root, concrete_args) 2023-05-06T17:35:00.7033450Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T17:35:00.7033909Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7034577Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner 2023-05-06T17:35:00.7035018Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7035679Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 778, in trace 2023-05-06T17:35:00.7036145Z (self.create_arg(fn(*args)),), 2023-05-06T17:35:00.7037010Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/_symbolic_trace.py", line 652, in flatten_fn 2023-05-06T17:35:00.7037482Z tree_out = root_fn(*tree_args) 2023-05-06T17:35:00.7038205Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 491, in wrapped 2023-05-06T17:35:00.7038677Z out = f(*tensors) 2023-05-06T17:35:00.7039342Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1187, in joint_helper 2023-05-06T17:35:00.7039892Z return functionalized_f_helper(primals, tangents) 2023-05-06T17:35:00.7040739Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1140, in functionalized_f_helper 2023-05-06T17:35:00.7041259Z f_outs = fn(*f_args) 2023-05-06T17:35:00.7041899Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1103, in inner_fn 2023-05-06T17:35:00.7042393Z backward_out = torch.autograd.grad( 
2023-05-06T17:35:00.7043061Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 284, in grad 2023-05-06T17:35:00.7043492Z return handle_torch_function( 2023-05-06T17:35:00.7044176Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/overrides.py", line 1539, in handle_torch_function 2023-05-06T17:35:00.7044726Z result = mode.__torch_function__(public_api, types, args, kwargs) 2023-05-06T17:35:00.7045484Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T17:35:00.7046018Z return replace_fn(func)(*args, **kwargs) 2023-05-06T17:35:00.7046689Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 319, in grad 2023-05-06T17:35:00.7047300Z result = Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T17:35:00.7048056Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T17:35:00.7048525Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7049245Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 540, in __torch_dispatch__ 2023-05-06T17:35:00.7049835Z return self.inner_torch_dispatch(func, types, args, kwargs) 2023-05-06T17:35:00.7050672Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 565, in inner_torch_dispatch 2023-05-06T17:35:00.7051550Z return proxy_call(self, func, self.pre_autograd, args, kwargs) 2023-05-06T17:35:00.7052337Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 371, in proxy_call 2023-05-06T17:35:00.7052807Z out = func(*args, **kwargs) 2023-05-06T17:35:00.7053426Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 398, in __call__ 2023-05-06T17:35:00.7053889Z return self._op(*args, **kwargs or {}) 2023-05-06T17:35:00.7054549Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_stats.py", line 20, in wrapper 2023-05-06T17:35:00.7054979Z return fn(*args, **kwargs) 2023-05-06T17:35:00.7055880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1105, in __torch_dispatch__ 2023-05-06T17:35:00.7056426Z return self.dispatch(func, types, args, kwargs) 2023-05-06T17:35:00.7057136Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1269, in dispatch 2023-05-06T17:35:00.7057671Z return decomposition_table[func](*args, **kwargs) 2023-05-06T17:35:00.7058350Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4050, in view 2023-05-06T17:35:00.7058872Z return _reshape_view_helper(a, *shape, allow_copy=False) 2023-05-06T17:35:00.7059605Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_refs/__init__.py", line 3261, in _reshape_view_helper 2023-05-06T17:35:00.7060074Z raise ValueError(msg) 2023-05-06T17:35:00.7060694Z torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised: 2023-05-06T17:35:00.7061290Z ValueError: Cannot view a tensor with shape torch.Size([2, 12, 1024, 513]) and strides (6303744, 513, 6156, 1) as a tensor with shape (24, 4, 256, 513)! 
2023-05-06T17:35:00.7061631Z 2023-05-06T17:35:00.7061638Z 2023-05-06T17:35:00.7061861Z You can suppress this exception and fall back to eager by setting: 2023-05-06T17:35:00.7062254Z import torch._dynamo 2023-05-06T17:35:00.7062637Z torch._dynamo.config.suppress_errors = True 2023-05-06T17:35:00.7062858Z 2023-05-06T17:35:01.9882434Z ERROR 2023-05-06T17:35:13.6714541Z cuda train hf_Reformer [2023-05-06 17:35:13,670] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:23.6693173Z [2023-05-06 17:35:23,667] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:25.9165993Z [2023-05-06 17:35:25,915] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:26.9043358Z [2023-05-06 17:35:26,903] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:28.6864861Z [2023-05-06 17:35:28,685] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:34.1439874Z [2023-05-06 17:35:34,143] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:34.6472119Z [2023-05-06 17:35:34,646] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:36.1594704Z [2023-05-06 17:35:36,158] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:36.4973296Z [2023-05-06 17:35:36,496] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:36.8593381Z [2023-05-06 17:35:36,858] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:37.7457588Z [2023-05-06 17:35:37,745] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:38.7931183Z [2023-05-06 17:35:38,792] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:39.3431997Z [2023-05-06 17:35:39,342] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:40.5865746Z [2023-05-06 17:35:40,585] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:40.9149548Z [2023-05-06 17:35:40,914] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:41.2392172Z [2023-05-06 17:35:41,238] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:42.1427556Z [2023-05-06 17:35:42,141] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:43.1727928Z [2023-05-06 17:35:43,172] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:43.4918458Z [2023-05-06 17:35:43,491] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:43.7635184Z [2023-05-06 17:35:43,762] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:35:45.8831388Z [2023-05-06 17:35:45,881] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1048576*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:45.8882950Z [2023-05-06 17:35:45,887] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 2097152*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:46.1430066Z [2023-05-06 
17:35:46,142] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 786432*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:46.1438718Z [2023-05-06 17:35:46,143] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1048576*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:46.1478593Z [2023-05-06 17:35:46,147] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 64*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:46.1667858Z [2023-05-06 17:35:46,166] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 262144*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:46.1828159Z [2023-05-06 17:35:46,182] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 4096*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:35:58.1673948Z 1.085x 2023-05-06T17:36:52.8830468Z cuda train hf_T5 [2023-05-06 17:36:52,880] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:37:58.3008976Z 1.790x 2023-05-06T17:38:08.9095403Z Eager model failed to run 2023-05-06T17:38:08.9100198Z Traceback (most recent call last): 2023-05-06T17:38:08.9100603Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T17:38:08.9101012Z self.model_iter_fn(model, example_inputs) 2023-05-06T17:38:08.9101615Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 400, in forward_and_backward_pass 2023-05-06T17:38:08.9104573Z self.grad_scaler.scale(loss).backward() 2023-05-06T17:38:08.9105910Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward 2023-05-06T17:38:08.9106271Z torch.autograd.backward( 2023-05-06T17:38:08.9106781Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 204, in backward 2023-05-06T17:38:08.9107282Z Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 2023-05-06T17:38:08.9108283Z torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 384.00 MiB. GPU 0 has a total capacty of 39.39 GiB of which 235.06 MiB is free. Process 1007660 has 39.16 GiB memory in use. Of the allocated memory 37.56 GiB is allocated by PyTorch, and 1.09 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. 
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF 2023-05-06T17:38:08.9109025Z 2023-05-06T17:38:08.9109591Z The above exception was the direct cause of the following exception: 2023-05-06T17:38:08.9109802Z 2023-05-06T17:38:08.9109917Z Traceback (most recent call last): 2023-05-06T17:38:08.9110253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:38:08.9110618Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:38:08.9110986Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T17:38:08.9111339Z self.validate_model(model, example_inputs) 2023-05-06T17:38:08.9111694Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T17:38:08.9112062Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T17:38:08.9112560Z NotImplementedError: Eager model failed to run 2023-05-06T17:38:08.9112746Z 2023-05-06T17:38:08.9112861Z WARNING:root:hf_T5_base failed to load 2023-05-06T17:41:40.0685707Z cuda train hf_T5_large [2023-05-06 17:41:40,065] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:44:57.1510083Z 1.895x 2023-05-06T17:45:16.0178251Z cuda train lennard_jones 0.859x 2023-05-06T17:45:19.7711490Z Test train is not implemented. 2023-05-06T17:45:19.7714267Z Traceback (most recent call last): 2023-05-06T17:45:19.7714762Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:45:19.7715140Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:45:19.7718804Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 300, in load_model 2023-05-06T17:45:19.7719223Z benchmark = benchmark_cls( 2023-05-06T17:45:19.7720225Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 21, in __call__ 2023-05-06T17:45:19.7720601Z obj = type.__call__(cls, *args, **kwargs) 2023-05-06T17:45:19.7721008Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/llama/__init__.py", line 17, in __init__ 2023-05-06T17:45:19.7722939Z super().__init__(test=test, device=device, jit=jit, batch_size=batch_size, extra_args=extra_args) 2023-05-06T17:45:19.7723561Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 85, in __init__ 2023-05-06T17:45:19.7723938Z self.determine_batch_size(batch_size) 2023-05-06T17:45:19.7724327Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/util/model.py", line 218, in determine_batch_size 2023-05-06T17:45:19.7724737Z raise NotImplementedError(f"Test {self.test} is not implemented.") 2023-05-06T17:45:19.7725145Z NotImplementedError: Test train is not implemented. 
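The hf_T5_base failure above is a plain CUDA OOM during the eager validation pass, and the error text itself names the relevant knob: when a large amount of memory is reserved by the caching allocator but unallocated, PYTORCH_CUDA_ALLOC_CONF can be used to limit fragmentation. A hedged sketch of how that variable is applied; the max_split_size_mb value and the placeholder model are illustrative only, not settings taken from this job.

import os

# The allocator reads this variable when it initializes, so set it before
# importing torch (or at least before the first CUDA allocation).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

if torch.cuda.is_available():
    model = torch.nn.Linear(1024, 1024).cuda()       # stand-in for the real model
    out = model(torch.randn(8, 1024, device="cuda"))
    print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())

For a benchmark shard like this one the simpler fix is usually to lower the batch size for the offending model; tuning the allocator only helps when the failure really is fragmentation rather than the model not fitting at all.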
2023-05-06T17:45:19.7725339Z 2023-05-06T17:45:19.7725450Z WARNING:root:llama failed to load 2023-05-06T17:45:38.0924044Z cuda train maml_omniglot [2023-05-06 17:45:38,090] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 5*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:45:49.8479864Z 0.853x 2023-05-06T17:46:19.1403617Z cuda train mnasnet1_0 [2023-05-06 17:46:19,138] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:46:54.9865890Z [2023-05-06 17:46:54,983] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:47:33.8837493Z 0.990x 2023-05-06T17:48:08.3895446Z cuda train mobilenet_v2 [2023-05-06 17:48:08,387] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:48:40.5328440Z [2023-05-06 17:48:40,530] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:49:18.1285838Z 1.227x 2023-05-06T17:49:25.1509011Z Eager model failed to run 2023-05-06T17:49:25.1509330Z Traceback (most recent call last): 2023-05-06T17:49:25.1511716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T17:49:25.1512568Z self.model_iter_fn(model, example_inputs) 2023-05-06T17:49:25.1513233Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T17:49:25.1513586Z pred = mod(*cloned_inputs) 2023-05-06T17:49:25.1514581Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 662, in call_wrapped 2023-05-06T17:49:25.1514971Z return self._wrapped_call(self, *args, **kwargs) 2023-05-06T17:49:25.1515487Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 281, in __call__ 2023-05-06T17:49:25.1515801Z raise e 2023-05-06T17:49:25.1516824Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 271, in __call__ 2023-05-06T17:49:25.1517368Z return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] 2023-05-06T17:49:25.1518012Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:49:25.1518371Z return self._call_impl(*args, **kwargs) 2023-05-06T17:49:25.1518876Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:49:25.1519234Z return forward_call(*args, **kwargs) 2023-05-06T17:49:25.1519516Z File ".3", line 207, in forward 2023-05-06T17:49:25.1519870Z activation_post_process_101 = self.activation_post_process_101(classifier_1); classifier_1 = None 2023-05-06T17:49:25.1520550Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T17:49:25.1520929Z return self._call_impl(*args, **kwargs) 2023-05-06T17:49:25.1521416Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T17:49:25.1521778Z return forward_call(*args, **kwargs) 2023-05-06T17:49:25.1522294Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/ao/quantization/fake_quantize.py", line 342, in forward 2023-05-06T17:49:25.1522673Z return torch.fused_moving_avg_obs_fake_quant( 2023-05-06T17:49:25.1522982Z RuntimeError: 
expected scalar type Float but found Half 2023-05-06T17:49:25.1523170Z 2023-05-06T17:49:25.1523343Z The above exception was the direct cause of the following exception: 2023-05-06T17:49:25.1523550Z 2023-05-06T17:49:25.1523663Z Traceback (most recent call last): 2023-05-06T17:49:25.1523976Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T17:49:25.1524339Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T17:49:25.1524726Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T17:49:25.1525075Z self.validate_model(model, example_inputs) 2023-05-06T17:49:25.1525424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T17:49:25.1525802Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T17:49:25.1526135Z NotImplementedError: Eager model failed to run 2023-05-06T17:49:25.1526315Z 2023-05-06T17:49:25.1526446Z WARNING:root:mobilenet_v2_quantized_qat failed to load 2023-05-06T17:49:59.1052714Z cuda train mobilenet_v3_large [2023-05-06 17:49:59,103] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T17:50:39.2695466Z [2023-05-06 17:50:39,267] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 1000*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T17:51:43.0850290Z 1.014x 2023-05-06T17:51:51.9643473Z cuda train moco [2023-05-06 17:51:51,963] torch._dynamo.variables.torch: [WARNING] Profiler will be ignored 2023-05-06T18:10:24.6546598Z [2023-05-06 18:10:24,652] torch._dynamo.convert_frame: [WARNING] torch._dynamo hit config.cache_size_limit (64) 2023-05-06T18:10:24.6547878Z function: '' (/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py:50) 2023-05-06T18:10:24.6548854Z to diagnose recompilation issues, set env variable TORCHDYNAMO_REPORT_GUARD_FAILURES=1 and also see https://pytorch.org/docs/master/compile/troubleshooting.html. 
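The moco warning above means torch._dynamo has recompiled the same frame more than config.cache_size_limit (64 in this build) times, usually because guards keep failing on changing shapes or Python values, after which the frame simply runs in eager mode. A hedged sketch of the mechanism; the toy function and the lowered limit are for illustration and have nothing to do with moco's builder.py.

import os

# The diagnostics switch suggested by the warning; setting it before torch is
# imported should ensure the config picks it up.
os.environ["TORCHDYNAMO_REPORT_GUARD_FAILURES"] = "1"

import torch
import torch._dynamo as dynamo

dynamo.config.cache_size_limit = 8        # default is 64 here; lowered to trip quickly

@torch.compile
def f(x):
    return x.sum() + 1

# Each new rank produces new guards and therefore a fresh compile; once the
# per-frame cache overflows, dynamo should log the same cache_size_limit
# warning seen in this job and run the frame in eager mode from then on.
for rank in range(1, 12):
    f(torch.randn(*([2] * rank)))

Raising cache_size_limit usually just hides the problem; the durable fix is to remove the source of the guard churn in the model code itself.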
2023-05-06T18:10:25.3710778Z [2023-05-06 18:10:25,370] torch._inductor.utils: [WARNING] DeviceCopy in input program 2023-05-06T18:11:02.1801737Z ERROR:common:Backend dynamo failed in warmup() 2023-05-06T18:11:02.1802268Z Traceback (most recent call last): 2023-05-06T18:11:02.1802621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1511, in warmup 2023-05-06T18:11:02.1803597Z fn(model, example_inputs) 2023-05-06T18:11:02.1805497Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 282, in _fn 2023-05-06T18:11:02.1806212Z return fn(*args, **kwargs) 2023-05-06T18:11:02.1806815Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 395, in forward_and_backward_pass 2023-05-06T18:11:02.1807414Z cloned_inputs = clone_inputs(inputs) 2023-05-06T18:11:02.1808083Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 396, in 2023-05-06T18:11:02.1808762Z self.optimizer_zero_grad(mod) 2023-05-06T18:11:02.1809345Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in 2023-05-06T18:11:02.1809979Z pred = mod(*cloned_inputs) 2023-05-06T18:11:02.1810993Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T18:11:02.1811712Z return self._call_impl(*args, **kwargs) 2023-05-06T18:11:02.1812601Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T18:11:02.1813236Z return forward_call(*args, **kwargs) 2023-05-06T18:11:02.1814142Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1536, in forward 2023-05-06T18:11:02.1814780Z else self._run_ddp_forward(*inputs, **kwargs) 2023-05-06T18:11:02.1815700Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1373, in _run_ddp_forward 2023-05-06T18:11:02.1816454Z return self.module(*inputs, **kwargs) # type: ignore[index] 2023-05-06T18:11:02.1817396Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T18:11:02.1818072Z return self._call_impl(*args, **kwargs) 2023-05-06T18:11:02.1818957Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T18:11:02.1819540Z return forward_call(*args, **kwargs) 2023-05-06T18:11:02.1820156Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 130, in forward 2023-05-06T18:11:02.1820833Z self._momentum_update_key_encoder() # update the key encoder 2023-05-06T18:11:02.1821592Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 133, in 2023-05-06T18:11:02.1822227Z im_k, idx_unshuffle = self._batch_shuffle_ddp(im_k) 2023-05-06T18:11:02.1822893Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 139, in 2023-05-06T18:11:02.1823557Z k = self._batch_unshuffle_ddp(k, idx_unshuffle) 2023-05-06T18:11:02.1824289Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 158, in 2023-05-06T18:11:02.1824858Z self._dequeue_and_enqueue(k) 2023-05-06T18:11:02.1825707Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context 2023-05-06T18:11:02.1826717Z return func(*args, **kwargs) 2023-05-06T18:11:02.1827340Z File 
"/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 55, in _dequeue_and_enqueue 2023-05-06T18:11:02.1828001Z keys = concat_all_gather(keys) 2023-05-06T18:11:02.1828647Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 59, in 2023-05-06T18:11:02.1829259Z ptr = int(self.queue_ptr) 2023-05-06T18:11:02.1830109Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 432, in catch_errors 2023-05-06T18:11:02.1830960Z return hijacked_callback(frame, cache_size, hooks, frame_state) 2023-05-06T18:11:02.1832135Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 519, in _convert_frame 2023-05-06T18:11:02.1832820Z result = inner_convert(frame, cache_size, hooks, frame_state) 2023-05-06T18:11:02.1833726Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 122, in _fn 2023-05-06T18:11:02.1834325Z return fn(*args, **kwargs) 2023-05-06T18:11:02.1835041Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 355, in _convert_frame_assert 2023-05-06T18:11:02.1835632Z return _compile( 2023-05-06T18:11:02.1836142Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T18:11:02.1836915Z r = func(*args, **kwargs) 2023-05-06T18:11:02.1837565Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 425, in _compile 2023-05-06T18:11:02.1837947Z out_code = transform_code_object(code, transform) 2023-05-06T18:11:02.1838523Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1000, in transform_code_object 2023-05-06T18:11:02.1839260Z transformations(instructions, code_options) 2023-05-06T18:11:02.1839880Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 410, in transform 2023-05-06T18:11:02.1840195Z tracer.run() 2023-05-06T18:11:02.1840663Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2010, in run 2023-05-06T18:11:02.1840984Z super().run() 2023-05-06T18:11:02.1841439Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 703, in run 2023-05-06T18:11:02.1841765Z and self.step() 2023-05-06T18:11:02.1842238Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 663, in step 2023-05-06T18:11:02.1842663Z getattr(self, inst.opname)(inst) 2023-05-06T18:11:02.1843270Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2098, in RETURN_VALUE 2023-05-06T18:11:02.1843647Z self.output.compile_subgraph( 2023-05-06T18:11:02.1844167Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 736, in compile_subgraph 2023-05-06T18:11:02.1844575Z self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) 2023-05-06T18:11:02.1844948Z File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner 2023-05-06T18:11:02.1845248Z return func(*args, **kwds) 2023-05-06T18:11:02.1845810Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 813, in compile_and_call_fx_graph 2023-05-06T18:11:02.1846234Z compiled_fn = self.call_user_compiler(gm) 2023-05-06T18:11:02.1846756Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T18:11:02.1847097Z r = func(*args, **kwargs) 2023-05-06T18:11:02.1847600Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 872, in call_user_compiler 2023-05-06T18:11:02.1848233Z raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( 2023-05-06T18:11:02.1848808Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 868, in call_user_compiler 2023-05-06T18:11:02.1849210Z compiled_fn = compiler_fn(gm, self.example_inputs()) 2023-05-06T18:11:02.1849824Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 206, in compile_fn 2023-05-06T18:11:02.1850323Z return self.backend_compile_fn(gm, example_inputs) 2023-05-06T18:11:02.1850878Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 108, in debug_wrapper 2023-05-06T18:11:02.1851261Z compiled_gm = compiler_fn(gm, example_inputs) 2023-05-06T18:11:02.1851942Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/inductor.py", line 9, in inductor 2023-05-06T18:11:02.1852308Z return compile_fx(*args, **kwargs) 2023-05-06T18:11:02.1852817Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 728, in compile_fx 2023-05-06T18:11:02.1853155Z return aot_autograd( 2023-05-06T18:11:02.1853630Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 56, in compiler_fn 2023-05-06T18:11:02.1854027Z cg = aot_module_simplified(gm, example_inputs, **kwargs) 2023-05-06T18:11:02.1854591Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3334, in aot_module_simplified 2023-05-06T18:11:02.1854971Z compiled_fn = create_aot_dispatcher_function( 2023-05-06T18:11:02.1855482Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 177, in time_wrapper 2023-05-06T18:11:02.1855811Z r = func(*args, **kwargs) 2023-05-06T18:11:02.1856436Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 2959, in create_aot_dispatcher_function 2023-05-06T18:11:02.1856857Z fw_metadata = run_functionalized_fw_and_collect_metadata( 2023-05-06T18:11:02.1857386Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 719, in inner 2023-05-06T18:11:02.1857727Z flat_f_outs = f(*flat_f_args) 2023-05-06T18:11:02.1858221Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 3259, in functional_call 2023-05-06T18:11:02.1858616Z out = Interpreter(mod).run(*args[params_len:], **kwargs) 2023-05-06T18:11:02.1859115Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 138, in run 2023-05-06T18:11:02.1859458Z self.env[node] = self.run_node(node) 2023-05-06T18:11:02.1859933Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 195, in run_node 2023-05-06T18:11:02.1860306Z return getattr(self, n.op)(n.target, args, kwargs) 2023-05-06T18:11:02.1860818Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/interpreter.py", line 267, in call_function 2023-05-06T18:11:02.1861151Z return target(*args, **kwargs) 2023-05-06T18:11:02.1861652Z File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/overrides.py", line 22, in __torch_function__ 2023-05-06T18:11:02.1862025Z return replace_fn(func)(*args, **kwargs) 2023-05-06T18:11:02.1862464Z torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised: 2023-05-06T18:11:02.1862907Z TypeError: can't assign a SymInt to a torch.cuda.LongTensor 2023-05-06T18:11:02.1863104Z 2023-05-06T18:11:02.1863416Z While executing %setitem_1 : [#users=0] = call_function[target=operator.setitem](args = (%l__self___queue_ptr, 0, %mod_1), kwargs = {}) 2023-05-06T18:11:02.1863888Z Original traceback: 2023-05-06T18:11:02.1864265Z File "/var/lib/jenkins/workspace/torchbench/torchbenchmark/models/moco/moco/builder.py", line 66, in 2023-05-06T18:11:02.1864811Z self.queue_ptr[0] = ptr 2023-05-06T18:11:02.1864959Z 2023-05-06T18:11:02.1864966Z 2023-05-06T18:11:02.1864972Z 2023-05-06T18:11:02.1865135Z You can suppress this exception and fall back to eager by setting: 2023-05-06T18:11:02.1865425Z import torch._dynamo 2023-05-06T18:11:02.1865685Z torch._dynamo.config.suppress_errors = True 2023-05-06T18:11:02.1865865Z 2023-05-06T18:11:05.9989958Z ERROR 2023-05-06T18:11:17.3929065Z cuda train nvidia_deeprecommender [2023-05-06 18:11:17,391] torch._inductor.utils: [WARNING] using triton random, expect difference from eager 2023-05-06T18:11:28.3293326Z 0.998x 2023-05-06T18:11:33.0124908Z Eager model failed to run 2023-05-06T18:11:33.0133628Z Traceback (most recent call last): 2023-05-06T18:11:33.0137424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1246, in validate_model 2023-05-06T18:11:33.0137838Z self.model_iter_fn(model, example_inputs) 2023-05-06T18:11:33.0138233Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 398, in forward_and_backward_pass 2023-05-06T18:11:33.0138604Z pred = mod(*cloned_inputs) 2023-05-06T18:11:33.0140247Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T18:11:33.0140631Z return self._call_impl(*args, **kwargs) 2023-05-06T18:11:33.0141142Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T18:11:33.0141489Z return forward_call(*args, **kwargs) 2023-05-06T18:11:33.0143602Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 148, in forward 2023-05-06T18:11:33.0144302Z return self._module(*args, **kwargs) 2023-05-06T18:11:33.0145403Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T18:11:33.0146085Z return self._call_impl(*args, **kwargs) 2023-05-06T18:11:33.0147038Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl 2023-05-06T18:11:33.0147685Z return forward_call(*args, **kwargs) 2023-05-06T18:11:33.0148607Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 285, in forward 2023-05-06T18:11:33.0149226Z return self._forward_impl(x) 2023-05-06T18:11:33.0150149Z File "/var/lib/jenkins/.local/lib/python3.10/site-packages/torchvision/models/resnet.py", line 268, in _forward_impl 2023-05-06T18:11:33.0150757Z x = self.conv1(x) 2023-05-06T18:11:33.0151604Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl 2023-05-06T18:11:33.0152276Z return self._call_impl(*args, 
**kwargs) 2023-05-06T18:11:33.0153202Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1557, in _call_impl 2023-05-06T18:11:33.0153865Z hook_result = hook(self, args, result) 2023-05-06T18:11:33.0154732Z File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/opacus/grad_sample/grad_sample_module.py", line 288, in capture_activations_hook 2023-05-06T18:11:33.0155531Z p._forward_counter += 1 2023-05-06T18:11:33.0156181Z AttributeError: 'Parameter' object has no attribute '_forward_counter' 2023-05-06T18:11:33.0156499Z 2023-05-06T18:11:33.0156957Z The above exception was the direct cause of the following exception: 2023-05-06T18:11:33.0157256Z 2023-05-06T18:11:33.0157446Z Traceback (most recent call last): 2023-05-06T18:11:33.0157939Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 2507, in run 2023-05-06T18:11:33.0158532Z ) = runner.load_model(device, model_name, batch_size=batch_size) 2023-05-06T18:11:33.0159169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/torchbench.py", line 340, in load_model 2023-05-06T18:11:33.0159736Z self.validate_model(model, example_inputs) 2023-05-06T18:11:33.0160779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/common.py", line 1248, in validate_model 2023-05-06T18:11:33.0161409Z raise NotImplementedError("Eager model failed to run") from e 2023-05-06T18:11:33.0161945Z NotImplementedError: Eager model failed to run 2023-05-06T18:11:33.0162219Z 2023-05-06T18:11:33.0162384Z WARNING:root:opacus_cifar10 failed to load 2023-05-06T18:12:53.4880340Z cuda train phlippe_densenet [2023-05-06 18:12:53,485] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 10*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T18:13:33.5896359Z 0.986x 2023-05-06T18:14:00.7286748Z cuda train phlippe_resnet [2023-05-06 18:14:00,726] torch.fx.experimental.symbolic_shapes: [WARNING] Ignored guard 10*s0 < 2147483648 == True, this could result in accuracy problems 2023-05-06T18:14:17.2529115Z 1.011x 2023-05-06T18:14:18.5320397Z abs_latency gmean=0.00x mean=36.787x 2023-05-06T18:14:18.5324239Z compilation_latency mean=81.394 seconds 2023-05-06T18:14:18.5324919Z compression_ratio mean=0.734x 2023-05-06T18:14:18.5325729Z eager_peak_mem gmean=0.00x mean=4.279x 2023-05-06T18:14:18.5326037Z dynamo_peak_mem gmean=0.00x mean=3.848x 2023-05-06T18:14:18.5326572Z calls_captured gmean=0.00x mean=594.000x 2023-05-06T18:14:18.5329445Z unique_graphs gmean=0.00x mean=3.909x 2023-05-06T18:14:18.5331753Z graph_breaks gmean=0.00x mean=12.455x 2023-05-06T18:14:18.5332642Z unique_graph_breaks gmean=0.00x mean=6.364x 2023-05-06T18:14:19.0998317Z + [[ training == \i\n\f\e\r\e\n\c\e ]] 2023-05-06T18:14:19.1139606Z ##[group]Run cat test/**/*.log || true 2023-05-06T18:14:19.1139952Z cat test/**/*.log || true 2023-05-06T18:14:19.1159339Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T18:14:19.1159598Z env: 2023-05-06T18:14:19.1159863Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:19.1160102Z GPU_FLAG: --gpus all 2023-05-06T18:14:19.1160434Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:19.1160756Z ##[endgroup] 2023-05-06T18:14:19.1220173Z cat: 'test/**/*.log': No such file or directory 2023-05-06T18:14:19.1249625Z Prepare all required actions 2023-05-06T18:14:19.1271941Z ##[group]Run ./.github/actions/get-workflow-job-id 2023-05-06T18:14:19.1272202Z with: 2023-05-06T18:14:19.1272991Z github-token: *** 2023-05-06T18:14:19.1273184Z 
env: 2023-05-06T18:14:19.1273390Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:19.1273626Z GPU_FLAG: --gpus all 2023-05-06T18:14:19.1273942Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:19.1274254Z ##[endgroup] 2023-05-06T18:14:19.1290540Z ##[group]Run set -eux 2023-05-06T18:14:19.1290786Z set -eux 2023-05-06T18:14:19.1291137Z GHA_WORKFLOW_JOB_ID=$(python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}") 2023-05-06T18:14:19.1291535Z echo "job-id=${GHA_WORKFLOW_JOB_ID}" >> "${GITHUB_OUTPUT}" 2023-05-06T18:14:19.1308580Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T18:14:19.1308852Z env: 2023-05-06T18:14:19.1309126Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:19.1309349Z GPU_FLAG: --gpus all 2023-05-06T18:14:19.1309683Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:19.1310184Z GITHUB_TOKEN: *** 2023-05-06T18:14:19.1310426Z ##[endgroup] 2023-05-06T18:14:19.1349434Z ++ python3 .github/scripts/get_workflow_job_id.py 4900301301 gh-ci-gcp-a100-11 2023-05-06T18:14:19.7468580Z + GHA_WORKFLOW_JOB_ID=13280812757 2023-05-06T18:14:19.7469165Z + echo job-id=13280812757 2023-05-06T18:14:19.7500053Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2023-05-06T18:14:19.7500362Z kill "$MONITOR_SCRIPT_PID" 2023-05-06T18:14:19.7518881Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T18:14:19.7519139Z env: 2023-05-06T18:14:19.7519355Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:19.7519829Z GPU_FLAG: --gpus all 2023-05-06T18:14:19.7520147Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:19.7520487Z MONITOR_SCRIPT_PID: 730439 2023-05-06T18:14:19.7520716Z ##[endgroup] 2023-05-06T18:14:19.7664308Z Prepare all required actions 2023-05-06T18:14:19.7664680Z Getting action download info 2023-05-06T18:14:19.9348146Z Download action repository 'actions/upload-artifact@v3' (SHA:0b7f8abb1508181956e8e162db84b466c27e18ce) 2023-05-06T18:14:20.3069636Z ##[group]Run ./.github/actions/upload-test-artifacts 2023-05-06T18:14:20.3069900Z with: 2023-05-06T18:14:20.3070209Z file-suffix: test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757 2023-05-06T18:14:20.3070578Z use-gha: anything-non-empty-to-use-gha 2023-05-06T18:14:20.3070812Z env: 2023-05-06T18:14:20.3071014Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:20.3071247Z GPU_FLAG: --gpus all 2023-05-06T18:14:20.3071571Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:20.3071912Z ##[endgroup] 2023-05-06T18:14:20.3134074Z ##[group]Run actions/upload-artifact@v3 2023-05-06T18:14:20.3134334Z with: 2023-05-06T18:14:20.3134880Z name: test-jsons-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip 2023-05-06T18:14:20.3135311Z retention-days: 14 2023-05-06T18:14:20.3135545Z if-no-files-found: warn 2023-05-06T18:14:20.3135781Z path: test/**/*.json 2023-05-06T18:14:20.3135988Z env: 2023-05-06T18:14:20.3136179Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:20.3136412Z GPU_FLAG: --gpus all 2023-05-06T18:14:20.3136742Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:20.3137044Z ##[endgroup] 2023-05-06T18:14:20.5299039Z With the provided path, there will be 3 files uploaded 2023-05-06T18:14:20.5301988Z Starting artifact upload 2023-05-06T18:14:20.5303115Z For more detailed logs during the artifact upload process, enable 
step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging 2023-05-06T18:14:20.5303682Z Artifact name is valid! 2023-05-06T18:14:20.7261556Z Container for artifact "test-jsons-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip" successfully created. Starting upload of file(s) 2023-05-06T18:14:21.0623917Z Total size of all the files uploaded is 29105 bytes 2023-05-06T18:14:21.0624304Z File upload process has finished. Finalizing the artifact upload 2023-05-06T18:14:21.2271996Z Artifact has been finalized. All files have been successfully uploaded! 2023-05-06T18:14:21.2272305Z 2023-05-06T18:14:21.2272484Z The raw size of all the files that were specified for upload is 300267 bytes 2023-05-06T18:14:21.2272971Z The size of all the files that were uploaded is 29105 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage 2023-05-06T18:14:21.2273251Z 2023-05-06T18:14:21.2273872Z Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 2023-05-06T18:14:21.2274233Z 2023-05-06T18:14:21.2274710Z Artifact test-jsons-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip has been successfully uploaded! 2023-05-06T18:14:21.2348142Z ##[group]Run actions/upload-artifact@v3 2023-05-06T18:14:21.2348398Z with: 2023-05-06T18:14:21.2348764Z name: test-reports-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip 2023-05-06T18:14:21.2349127Z retention-days: 14 2023-05-06T18:14:21.2349364Z if-no-files-found: ignore 2023-05-06T18:14:21.2349613Z path: test/**/*.xml test/**/*.csv 2023-05-06T18:14:21.2349818Z env: 2023-05-06T18:14:21.2350022Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:21.2350252Z GPU_FLAG: --gpus all 2023-05-06T18:14:21.2350561Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:21.2351102Z ##[endgroup] 2023-05-06T18:14:21.4518968Z With the provided path, there will be 21 files uploaded 2023-05-06T18:14:21.4521695Z Starting artifact upload 2023-05-06T18:14:21.4522969Z For more detailed logs during the artifact upload process, enable step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging 2023-05-06T18:14:21.4523529Z Artifact name is valid! 2023-05-06T18:14:21.5631476Z Container for artifact "test-reports-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip" successfully created. Starting upload of file(s) 2023-05-06T18:14:23.7933186Z Total size of all the files uploaded is 21672 bytes 2023-05-06T18:14:23.7933600Z File upload process has finished. Finalizing the artifact upload 2023-05-06T18:14:23.9434298Z Artifact has been finalized. All files have been successfully uploaded! 2023-05-06T18:14:23.9434550Z 2023-05-06T18:14:23.9434763Z The raw size of all the files that were specified for upload is 42805 bytes 2023-05-06T18:14:23.9435300Z The size of all the files that were uploaded is 21672 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage 2023-05-06T18:14:23.9435678Z 2023-05-06T18:14:23.9440349Z Note: The size of downloaded zips can differ significantly from the reported size. 
For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 2023-05-06T18:14:23.9440764Z 2023-05-06T18:14:23.9441251Z Artifact test-reports-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip has been successfully uploaded! 2023-05-06T18:14:23.9523520Z ##[group]Run actions/upload-artifact@v3 2023-05-06T18:14:23.9523776Z with: 2023-05-06T18:14:23.9524150Z name: usage-log-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip 2023-05-06T18:14:23.9524508Z retention-days: 14 2023-05-06T18:14:23.9524747Z if-no-files-found: ignore 2023-05-06T18:14:23.9525135Z path: usage_log.txt test/**/*.log 2023-05-06T18:14:23.9525350Z env: 2023-05-06T18:14:23.9525555Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:23.9525791Z GPU_FLAG: --gpus all 2023-05-06T18:14:23.9526123Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:23.9526439Z ##[endgroup] 2023-05-06T18:14:24.1818943Z Multiple search paths detected. Calculating the least common ancestor of all paths 2023-05-06T18:14:24.1821355Z The least common ancestor is /home/weiwangmeta/actions-runner/_work/pytorch/pytorch. This will be the root directory of the artifact 2023-05-06T18:14:24.1822358Z With the provided path, there will be 1 file uploaded 2023-05-06T18:14:24.1825615Z Starting artifact upload 2023-05-06T18:14:24.1826559Z For more detailed logs during the artifact upload process, enable step-debugging: https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging#enabling-step-debug-logging 2023-05-06T18:14:24.1827138Z Artifact name is valid! 2023-05-06T18:14:24.2917332Z Container for artifact "usage-log-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip" successfully created. Starting upload of file(s) 2023-05-06T18:14:26.2396256Z Total size of all the files uploaded is 2172003 bytes 2023-05-06T18:14:26.2396795Z File upload process has finished. Finalizing the artifact upload 2023-05-06T18:14:26.3268784Z Artifact has been finalized. All files have been successfully uploaded! 2023-05-06T18:14:26.3269037Z 2023-05-06T18:14:26.3269215Z The raw size of all the files that were specified for upload is 132737332 bytes 2023-05-06T18:14:26.3269684Z The size of all the files that were uploaded is 2172003 bytes. This takes into account any gzip compression used to reduce the upload size, time and storage 2023-05-06T18:14:26.3269983Z 2023-05-06T18:14:26.3270560Z Note: The size of downloaded zips can differ significantly from the reported size. For more information see: https://github.com/actions/upload-artifact#zipped-artifact-downloads 2023-05-06T18:14:26.3271405Z 2023-05-06T18:14:26.3272098Z Artifact usage-log-runattempt1-test-inductor_torchbench_perf-2-3-linux.gcp.a100.large_13280812757.zip has been successfully uploaded! 2023-05-06T18:14:26.3383662Z ##[group]Run # shellcheck disable=SC2156 2023-05-06T18:14:26.3383973Z # shellcheck disable=SC2156 2023-05-06T18:14:26.3384348Z find . 
-iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2023-05-06T18:14:26.3402886Z shell: /usr/bin/bash -e {0} 2023-05-06T18:14:26.3403116Z env: 2023-05-06T18:14:26.3403325Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:26.3403548Z GPU_FLAG: --gpus all 2023-05-06T18:14:26.3403884Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:26.3404196Z ##[endgroup] 2023-05-06T18:14:26.7597890Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2023-05-06T18:14:26.7598216Z with: 2023-05-06T18:14:26.7598388Z env: 2023-05-06T18:14:26.7598597Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:26.7598832Z GPU_FLAG: --gpus all 2023-05-06T18:14:26.7599164Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:26.7599465Z ##[endgroup] 2023-05-06T18:14:26.7614810Z ##[group]Run set -eou pipefail 2023-05-06T18:14:26.7615092Z set -eou pipefail 2023-05-06T18:14:26.7615311Z  2023-05-06T18:14:26.7615598Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2023-05-06T18:14:26.7615889Z for _ in $(seq 1440); do 2023-05-06T18:14:26.7616158Z  # Break if no ssh session exists anymore 2023-05-06T18:14:26.7616419Z  if [ "$(who)" = "" ]; then 2023-05-06T18:14:26.7616626Z  break 2023-05-06T18:14:26.7616877Z  fi 2023-05-06T18:14:26.7617066Z  echo "." 2023-05-06T18:14:26.7617273Z  sleep 5 2023-05-06T18:14:26.7617491Z done 2023-05-06T18:14:26.7635696Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T18:14:26.7635963Z env: 2023-05-06T18:14:26.7636175Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:26.7636392Z GPU_FLAG: --gpus all 2023-05-06T18:14:26.7636919Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:26.7637242Z ##[endgroup] 2023-05-06T18:14:26.7673919Z Holding runner for 2 hours until all ssh sessions have logged out 2023-05-06T18:14:26.7721089Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2023-05-06T18:14:26.7721485Z # ignore expansion of "docker ps -q" since it could be empty 2023-05-06T18:14:26.7721791Z # shellcheck disable=SC2046 2023-05-06T18:14:26.7722068Z docker stop $(docker ps -q) || true 2023-05-06T18:14:26.7722349Z # Prune all of the docker images 2023-05-06T18:14:26.7722610Z docker system prune -af 2023-05-06T18:14:26.7739131Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2023-05-06T18:14:26.7739396Z env: 2023-05-06T18:14:26.7739593Z GIT_DEFAULT_BRANCH: main 2023-05-06T18:14:26.7739830Z GPU_FLAG: --gpus all 2023-05-06T18:14:26.7740163Z DOCKER_CONTAINER_ID: 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:26.7740465Z ##[endgroup] 2023-05-06T18:14:27.3012809Z 75a0724c4dd3 2023-05-06T18:14:30.1484870Z Deleted Containers: 2023-05-06T18:14:30.1485249Z 75a0724c4dd3bb1259df64662222ca583aeb6cf8222aea13b472f46e35e5e24c 2023-05-06T18:14:30.1485465Z 2023-05-06T18:14:36.3034444Z Deleted Images: 2023-05-06T18:14:36.3035823Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7:17ccb3e70b07f61f36d65de7b3f472733f27d9eb 2023-05-06T18:14:36.3037595Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.8-cudnn8-py3-gcc7@sha256:4a48e0cd0a3dfdf3bf150da9befb0a7efa4b7d57c18bf4a7b3a0cfb6bc87bda9 2023-05-06T18:14:36.3039236Z deleted: sha256:acd0758f11b61d36d5225acf2ba65f831eee2a56287b1812e01ccbd5aec5d3ed 
2023-05-06T18:14:36.3040069Z deleted: sha256:a5726e0b4be1f1fe2f3f1eb858708967f4234ea2f7f7ef6f311c54d78de4e49a 2023-05-06T18:14:36.3040889Z deleted: sha256:df9d2d6121a9abc3b3f52e352db8a43085dd175984b240b26de1d06c940e0f2c 2023-05-06T18:14:36.3041592Z deleted: sha256:0fed81a225c6c101fe1a5a34b2e625c13fe4ba4120cb9a2005fda756109c4b78 2023-05-06T18:14:36.3042269Z deleted: sha256:9259a04efbdf34a6e9271a6bf034953d93b6a39276612322a67dac69e5173649 2023-05-06T18:14:36.3042904Z deleted: sha256:b636f47f90aa8ad0f946be8a85f29da4013e7cce7801944f50abf3131a2f724e 2023-05-06T18:14:36.3043454Z deleted: sha256:0f08250134e49e1769f34a89026a696e483ad4107d3c7032b82adfe390357f09 2023-05-06T18:14:36.3043980Z deleted: sha256:1e0b3b849f89c9b3658f1d5bb79706ab9d3d0b3c216cf9a142ff57fe0b17a379 2023-05-06T18:14:36.3044663Z deleted: sha256:334e59cc94a369d7434b23597b1570a43c2c5e4bc140c6fdde0140db5fbf099b 2023-05-06T18:14:36.3045070Z deleted: sha256:32aff7a3ee8d19103b7af7e91fb9f2b4151a95aa046284cf4b16c7c6a264caea 2023-05-06T18:14:36.3045506Z deleted: sha256:310a1818ee55f718220fc04f00464d2c6a27fe546db5ccd20ef4d01e41bfd1dc 2023-05-06T18:14:36.3045878Z deleted: sha256:20a0461d67fa36dba44c622f41c395389e2628b3f68284adde1a953768e76d55 2023-05-06T18:14:36.3046273Z deleted: sha256:cba4d7368021e2d6ac3a7b241641c134973d683bb569889f8de7be9e0fde871e 2023-05-06T18:14:36.3046677Z deleted: sha256:c36da9a3ca2ac6e7dcc0018139665a4ec8b82771fe2f3d895b36cf93f79d5b29 2023-05-06T18:14:36.3047102Z deleted: sha256:a6406ba21466d5275bfac5d363a3263199c06d993a8bfcb6a957564dedd4c6e9 2023-05-06T18:14:36.3047484Z deleted: sha256:51e48308a5ed1980845d7492e28a4eda9c6b92fced0b9a625cec9d228df17651 2023-05-06T18:14:36.3047868Z deleted: sha256:d45f8f6c568e3d43e292458a7852555e40810fb32a88fe57781df91390fca7da 2023-05-06T18:14:36.3048270Z deleted: sha256:ab0fb890591e219efda0e15ca4135203c277a26dc007dc97ef9dcbafc7ea056a 2023-05-06T18:14:36.3048659Z deleted: sha256:8760f199e68e0cf5ab8be43dc21500ff31926bfd39ed396959c1124afac964c7 2023-05-06T18:14:36.3049063Z deleted: sha256:699c9c8d6af9457aa2cd7cf68d83e05471bb427e794e561aef67471407009f01 2023-05-06T18:14:36.3049457Z deleted: sha256:9601951faaf1f01362df804e62d423d73f295be1f9719581abc4b7bc1938cb51 2023-05-06T18:14:36.3049837Z deleted: sha256:312449f97a856bf655894c640bd952ec581a545e76fcf5f5acd7acc0ca677fe8 2023-05-06T18:14:36.3050220Z deleted: sha256:b93ad526c0fa49e5f122cedfb3fad68631380cf81ccd944401dbf04491af35ab 2023-05-06T18:14:36.3050692Z deleted: sha256:146c41bf477197d1f0a7f4b3c2542ab70e126354dee7f15e35d229f0fa03249c 2023-05-06T18:14:36.3051081Z deleted: sha256:229eb93dab862b7b26e63696e35a0bec3ab4861e68228d977fed9047c762aac8 2023-05-06T18:14:36.3051451Z deleted: sha256:699bb977e4b16d178316c6165b80b410aaa647049177bcefc1a4d51cf4d42e76 2023-05-06T18:14:36.3051899Z deleted: sha256:d57e84bbd75fbe00ccabbc40bb06fb74f76333ef21cf6ba8515362456f2e54cb 2023-05-06T18:14:36.3052538Z deleted: sha256:11680df6038dc1f98a4a15f402ebbebf108c56ca7ca031bc5a7963a2f2f3fb70 2023-05-06T18:14:36.3053150Z deleted: sha256:34358d9763ced53a22438cf78e0311f0cb66376278760726bf0ee86f229cef7f 2023-05-06T18:14:36.3053617Z deleted: sha256:44f393272db6251272a45b217e9070b0f0395111cfbfc176e3b8c0dd44fb6421 2023-05-06T18:14:36.3054009Z deleted: sha256:147c6c2e8e8337bfa0bb3d938f5bc17ad315d5a906ac4e681887f2a1d4b4f276 2023-05-06T18:14:36.3054414Z deleted: sha256:8fbf7b29c725ae10e7b47ed30816c619fe20d3d770162e6c0f0eb89253a1480c 2023-05-06T18:14:36.3054800Z deleted: sha256:990dec6971758e8c91c357e7baa230b9859f0d5bf0c38551d1c844ddf185b2aa 2023-05-06T18:14:36.3055267Z 
deleted: sha256:532debbc4b65a8e5964bb9b36d3d41ba939a576d6e914326b013008e66122129 2023-05-06T18:14:36.3055919Z deleted: sha256:590470d77c3fbf630bd9013d76016072ab58d7397fd0612874d21f3085b1821e 2023-05-06T18:14:36.3056554Z deleted: sha256:243441d7807076257f437835ea95668e621cc165955af22bc26e67deb2f88eff 2023-05-06T18:14:36.3056936Z deleted: sha256:a1b9061f30e1ea3dcf9078140b07c201ecb53ae7895a1d0d7a415144b589b3c1 2023-05-06T18:14:36.3057351Z deleted: sha256:af366f383d0cc60a87ecf09908b9a375845e9ba53fc7b810c5e46a968bf9f4bc 2023-05-06T18:14:36.3057967Z deleted: sha256:b1d0b2fa8de25706f229ad57a9838ab07ca7bbfb90aa56e25126a5ebe8bbb97a 2023-05-06T18:14:36.3058485Z deleted: sha256:e01c6afe758031201b8e0254d37d0666e4a505edf9da0bd7db4b118236907721 2023-05-06T18:14:36.3059167Z deleted: sha256:bce551244421c75edf0ee670d7523de1b893b9e6d66a7ea2905db066a8c9545f 2023-05-06T18:14:36.3059716Z deleted: sha256:908514ae112022735a7c55b595fe134c784c8a350b14f1d3c35defc1e2299c4d 2023-05-06T18:14:36.3060264Z deleted: sha256:0dd867579514f431e93b8049e078bf63e33b9df200d1caead9819bab09734a66 2023-05-06T18:14:36.3061022Z deleted: sha256:1fde4ec2e7f0fb262129811873ebad0f7bb2f9c6e7ea00e0c3664852547b5672 2023-05-06T18:14:36.3061418Z deleted: sha256:c388b8d79254efab6554a4103679a95693e9125a901483339f57b051a646b774 2023-05-06T18:14:36.3062051Z deleted: sha256:710048e1dd395fc262fc067a28dd5402ab0bd7e24d12745a1430a91755657a4b 2023-05-06T18:14:36.3062596Z deleted: sha256:231f3ba4edc44a8acfed4f1f99108875fe9959d92e7eed7a521011a9d3f28fd9 2023-05-06T18:14:36.3063239Z deleted: sha256:ad6a774879daec0edfe8d0780463fdbaf43d56892d6f10f5ca94db56564162dd 2023-05-06T18:14:36.3063880Z deleted: sha256:d2b91b4dd2408619f7fd3e132bf8760018dfa4db180ba842ac60befb8573a1b9 2023-05-06T18:14:36.3064492Z deleted: sha256:934559fb26e09cb24468f4bfd58525edae9a559e6d2fe3834c498f6d15259c52 2023-05-06T18:14:36.3065086Z deleted: sha256:b0ad64d609aef614e02a6654513f4b81bf86e54ce50aaa5868602f1b7cf701e4 2023-05-06T18:14:36.3065775Z deleted: sha256:de1f6cb86049c8b323d0432bb35a29a68381afd75998dca1767f285dd7df520f 2023-05-06T18:14:36.3066460Z deleted: sha256:301d11b0260cd13c1e8afb96d06afc712515289ddc39871a14085de2bb49d6ab 2023-05-06T18:14:36.3067128Z deleted: sha256:8e95054dfb509d409882a82cfcc0bfe2ea4339140e44896458edac3e45ab929b 2023-05-06T18:14:36.3067848Z deleted: sha256:2fc2bc7c31f8ceea058a5e75a0377f60c149a472c15ebe934609b7e16b55ee81 2023-05-06T18:14:36.3068532Z deleted: sha256:cce3676545a210505b0357d897596ccf8d85b96712234091eaccf16f22601097 2023-05-06T18:14:36.3069163Z deleted: sha256:657cb781c68b81f40689589feb374e7c5de1b6b0536576f4eea957959d34965c 2023-05-06T18:14:36.3069819Z deleted: sha256:5c9c3cd81137d01bdc5c3e15b39bc4d8fcb7f06d22b72457603383102e30c624 2023-05-06T18:14:36.3070526Z deleted: sha256:1d305f5e218efac21c446dd2ec988a231c41a5fee549dacd670ffab8865c07a9 2023-05-06T18:14:36.3071330Z deleted: sha256:64c2ec5de3c995bd24957b1ad47dc950e83d79a6ee260e99f5a4d489f118038a 2023-05-06T18:14:36.3072006Z deleted: sha256:475a54c2a93de61ab1a000184b41b5c5370eef3842486f6c185cd9a001ff1a92 2023-05-06T18:14:36.3072382Z 2023-05-06T18:14:36.3073055Z Total reclaimed space: 26.62GB 2023-05-06T18:14:36.3151720Z Post job cleanup. 2023-05-06T18:14:36.3187680Z Post job cleanup. 
2023-05-06T18:14:36.4461042Z Unexpected error attempting to determine if executable file exists '/home/weiwangmeta/.local/bin/git': Error: EACCES: permission denied, stat '/home/weiwangmeta/.local/bin/git' 2023-05-06T18:14:36.4475091Z Unexpected error attempting to determine if executable file exists '/home/weiwangmeta/.local/bin/git': Error: EACCES: permission denied, stat '/home/weiwangmeta/.local/bin/git' 2023-05-06T18:14:36.4497717Z [command]/usr/bin/git version 2023-05-06T18:14:36.4554066Z git version 2.25.1 2023-05-06T18:14:36.4610572Z Temporarily overriding HOME='/home/weiwangmeta/actions-runner/_work/_temp/a5034012-fca1-4ee2-9c3e-0fc04af4905b' before making global git config changes 2023-05-06T18:14:36.4611446Z Adding repository directory to the temporary git global config as a safe directory 2023-05-06T18:14:36.4618564Z [command]/usr/bin/git config --global --add safe.directory /home/weiwangmeta/actions-runner/_work/pytorch/pytorch 2023-05-06T18:14:36.4668715Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2023-05-06T18:14:36.4708295Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || : 2023-05-06T18:14:36.4999023Z Entering 'android/libs/fbjni' 2023-05-06T18:14:36.5038871Z Entering 'third_party/FP16' 2023-05-06T18:14:36.5077637Z Entering 'third_party/FXdiv' 2023-05-06T18:14:36.5116308Z Entering 'third_party/NNPACK' 2023-05-06T18:14:36.5155562Z Entering 'third_party/QNNPACK' 2023-05-06T18:14:36.5195221Z Entering 'third_party/VulkanMemoryAllocator' 2023-05-06T18:14:36.5233720Z Entering 'third_party/XNNPACK' 2023-05-06T18:14:36.5288284Z Entering 'third_party/benchmark' 2023-05-06T18:14:36.5327363Z Entering 'third_party/cpuinfo' 2023-05-06T18:14:36.5366055Z Entering 'third_party/cub' 2023-05-06T18:14:36.5404467Z Entering 'third_party/cudnn_frontend' 2023-05-06T18:14:36.5448613Z Entering 'third_party/cutlass' 2023-05-06T18:14:36.5495117Z Entering 'third_party/eigen' 2023-05-06T18:14:36.5535638Z Entering 'third_party/fbgemm' 2023-05-06T18:14:36.5575248Z Entering 'third_party/fbgemm/third_party/asmjit' 2023-05-06T18:14:36.5613719Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2023-05-06T18:14:36.5653431Z Entering 'third_party/fbgemm/third_party/cutlass' 2023-05-06T18:14:36.5699278Z Entering 'third_party/fbgemm/third_party/googletest' 2023-05-06T18:14:36.5736831Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2023-05-06T18:14:36.5776329Z Entering 'third_party/flatbuffers' 2023-05-06T18:14:36.5817144Z Entering 'third_party/fmt' 2023-05-06T18:14:36.5856211Z Entering 'third_party/foxi' 2023-05-06T18:14:36.5894912Z Entering 'third_party/gemmlowp/gemmlowp' 2023-05-06T18:14:36.5933970Z Entering 'third_party/gloo' 2023-05-06T18:14:36.5972754Z Entering 'third_party/googletest' 2023-05-06T18:14:36.6011829Z Entering 'third_party/ideep' 2023-05-06T18:14:36.6048846Z Entering 'third_party/ideep/mkl-dnn' 2023-05-06T18:14:36.6094845Z Entering 'third_party/ios-cmake' 2023-05-06T18:14:36.6133674Z Entering 'third_party/ittapi' 2023-05-06T18:14:36.6172370Z Entering 'third_party/kineto' 2023-05-06T18:14:36.6210438Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2023-05-06T18:14:36.6248828Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2023-05-06T18:14:36.6287848Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2023-05-06T18:14:36.6327306Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2023-05-06T18:14:36.6365957Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2023-05-06T18:14:36.6402596Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2023-05-06T18:14:36.6442595Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2023-05-06T18:14:36.6480498Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2023-05-06T18:14:36.6517428Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2023-05-06T18:14:36.6556240Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2023-05-06T18:14:36.6596960Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2023-05-06T18:14:36.6633875Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2023-05-06T18:14:36.6676302Z Entering 'third_party/nccl/nccl' 2023-05-06T18:14:36.6715802Z Entering 'third_party/neon2sse' 2023-05-06T18:14:36.6753392Z Entering 'third_party/nlohmann' 2023-05-06T18:14:36.6794010Z Entering 'third_party/onnx' 2023-05-06T18:14:36.6848719Z Entering 'third_party/onnx/third_party/benchmark' 2023-05-06T18:14:36.6887802Z Entering 'third_party/onnx/third_party/pybind11' 2023-05-06T18:14:36.6928019Z Entering 'third_party/onnx-tensorrt' 2023-05-06T18:14:36.6967056Z Entering 'third_party/onnx-tensorrt/third_party/onnx' 2023-05-06T18:14:36.7010220Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark' 2023-05-06T18:14:36.7051212Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11' 2023-05-06T18:14:36.7089515Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang' 2023-05-06T18:14:36.7136020Z Entering 'third_party/pocketfft' 2023-05-06T18:14:36.7174467Z Entering 'third_party/protobuf' 2023-05-06T18:14:36.7216927Z Entering 'third_party/protobuf/third_party/benchmark' 2023-05-06T18:14:36.7254378Z Entering 'third_party/protobuf/third_party/googletest' 2023-05-06T18:14:36.7294921Z Entering 'third_party/psimd' 2023-05-06T18:14:36.7333906Z Entering 'third_party/pthreadpool' 2023-05-06T18:14:36.7373357Z Entering 'third_party/pybind11' 2023-05-06T18:14:36.7411771Z Entering 'third_party/python-enum' 2023-05-06T18:14:36.7449963Z Entering 'third_party/python-peachpy' 2023-05-06T18:14:36.7489367Z Entering 'third_party/python-six' 2023-05-06T18:14:36.7527217Z Entering 'third_party/sleef' 2023-05-06T18:14:36.7565945Z Entering 'third_party/tbb' 2023-05-06T18:14:36.7607217Z Entering 'third_party/tensorpipe' 2023-05-06T18:14:36.7645773Z Entering 'third_party/tensorpipe/third_party/googletest' 2023-05-06T18:14:36.7683629Z Entering 'third_party/tensorpipe/third_party/libnop' 2023-05-06T18:14:36.7721179Z Entering 'third_party/tensorpipe/third_party/libuv' 2023-05-06T18:14:36.7758214Z Entering 'third_party/tensorpipe/third_party/pybind11' 2023-05-06T18:14:36.7795294Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2023-05-06T18:14:36.7836936Z Entering 'third_party/zstd' 2023-05-06T18:14:36.7893423Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2023-05-06T18:14:36.7930413Z http.https://github.com/.extraheader 2023-05-06T18:14:36.7941143Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2023-05-06T18:14:36.7987187Z [command]/usr/bin/git submodule foreach --recursive git config --local --name-only --get-regexp 
'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :
2023-05-06T18:14:36.8264289Z Entering 'android/libs/fbjni'
2023-05-06T18:14:36.8284840Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8322235Z Entering 'third_party/FP16'
2023-05-06T18:14:36.8342457Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8378265Z Entering 'third_party/FXdiv'
2023-05-06T18:14:36.8400045Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8434074Z Entering 'third_party/NNPACK'
2023-05-06T18:14:36.8455264Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8490819Z Entering 'third_party/QNNPACK'
2023-05-06T18:14:36.8511267Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8545875Z Entering 'third_party/VulkanMemoryAllocator'
2023-05-06T18:14:36.8565498Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8601458Z Entering 'third_party/XNNPACK'
2023-05-06T18:14:36.8621721Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8673371Z Entering 'third_party/benchmark'
2023-05-06T18:14:36.8693558Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8729574Z Entering 'third_party/cpuinfo'
2023-05-06T18:14:36.8749316Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8785700Z Entering 'third_party/cub'
2023-05-06T18:14:36.8806348Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8842855Z Entering 'third_party/cudnn_frontend'
2023-05-06T18:14:36.8862509Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8902679Z Entering 'third_party/cutlass'
2023-05-06T18:14:36.8922857Z http.https://github.com/.extraheader
2023-05-06T18:14:36.8967881Z Entering 'third_party/eigen'
2023-05-06T18:14:36.8988014Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9025199Z Entering 'third_party/fbgemm'
2023-05-06T18:14:36.9046332Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9081810Z Entering 'third_party/fbgemm/third_party/asmjit'
2023-05-06T18:14:36.9101567Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9136975Z Entering 'third_party/fbgemm/third_party/cpuinfo'
2023-05-06T18:14:36.9156970Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9191706Z Entering 'third_party/fbgemm/third_party/cutlass'
2023-05-06T18:14:36.9211013Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9254193Z Entering 'third_party/fbgemm/third_party/googletest'
2023-05-06T18:14:36.9273790Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9308552Z Entering 'third_party/fbgemm/third_party/hipify_torch'
2023-05-06T18:14:36.9327630Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9365065Z Entering 'third_party/flatbuffers'
2023-05-06T18:14:36.9385238Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9422773Z Entering 'third_party/fmt'
2023-05-06T18:14:36.9443336Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9479370Z Entering 'third_party/foxi'
2023-05-06T18:14:36.9499364Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9533955Z Entering 'third_party/gemmlowp/gemmlowp'
2023-05-06T18:14:36.9553772Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9588061Z Entering 'third_party/gloo'
2023-05-06T18:14:36.9609239Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9644488Z Entering 'third_party/googletest'
2023-05-06T18:14:36.9664516Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9699265Z Entering 'third_party/ideep'
2023-05-06T18:14:36.9719666Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9753246Z Entering 'third_party/ideep/mkl-dnn'
2023-05-06T18:14:36.9772593Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9818691Z Entering 'third_party/ios-cmake'
2023-05-06T18:14:36.9839126Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9873683Z Entering 'third_party/ittapi'
2023-05-06T18:14:36.9894189Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9929354Z Entering 'third_party/kineto'
2023-05-06T18:14:36.9949442Z http.https://github.com/.extraheader
2023-05-06T18:14:36.9984084Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2023-05-06T18:14:37.0004880Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0039522Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2023-05-06T18:14:37.0058921Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0096072Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2023-05-06T18:14:37.0116168Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0152710Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2023-05-06T18:14:37.0172917Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0208947Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2023-05-06T18:14:37.0229191Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0263395Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2023-05-06T18:14:37.0283489Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0321129Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2023-05-06T18:14:37.0340571Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0376836Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2023-05-06T18:14:37.0395760Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0431002Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2023-05-06T18:14:37.0450780Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0486231Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2023-05-06T18:14:37.0505708Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0542877Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2023-05-06T18:14:37.0562050Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0597352Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2023-05-06T18:14:37.0616708Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0652815Z Entering 'third_party/nccl/nccl'
2023-05-06T18:14:37.0673594Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0708506Z Entering 'third_party/neon2sse'
2023-05-06T18:14:37.0729851Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0764286Z Entering 'third_party/nlohmann'
2023-05-06T18:14:37.0784832Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0821055Z Entering 'third_party/onnx'
2023-05-06T18:14:37.0841425Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0892616Z Entering 'third_party/onnx/third_party/benchmark'
2023-05-06T18:14:37.0912718Z http.https://github.com/.extraheader
2023-05-06T18:14:37.0947378Z Entering 'third_party/onnx/third_party/pybind11'
2023-05-06T18:14:37.0967470Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1005015Z Entering 'third_party/onnx-tensorrt'
2023-05-06T18:14:37.1027836Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1060838Z Entering 'third_party/onnx-tensorrt/third_party/onnx'
2023-05-06T18:14:37.1080699Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1121239Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/benchmark'
2023-05-06T18:14:37.1140346Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1175815Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11'
2023-05-06T18:14:37.1196231Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1230558Z Entering 'third_party/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang'
2023-05-06T18:14:37.1251311Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1292740Z Entering 'third_party/pocketfft'
2023-05-06T18:14:37.1313293Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1347502Z Entering 'third_party/protobuf'
2023-05-06T18:14:37.1368269Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1406335Z Entering 'third_party/protobuf/third_party/benchmark'
2023-05-06T18:14:37.1426351Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1460870Z Entering 'third_party/protobuf/third_party/googletest'
2023-05-06T18:14:37.1481158Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1518296Z Entering 'third_party/psimd'
2023-05-06T18:14:37.1537679Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1572775Z Entering 'third_party/pthreadpool'
2023-05-06T18:14:37.1592490Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1626297Z Entering 'third_party/pybind11'
2023-05-06T18:14:37.1647096Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1681961Z Entering 'third_party/python-enum'
2023-05-06T18:14:37.1703057Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1737564Z Entering 'third_party/python-peachpy'
2023-05-06T18:14:37.1758573Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1792554Z Entering 'third_party/python-six'
2023-05-06T18:14:37.1813553Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1847135Z Entering 'third_party/sleef'
2023-05-06T18:14:37.1867566Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1902363Z Entering 'third_party/tbb'
2023-05-06T18:14:37.1923666Z http.https://github.com/.extraheader
2023-05-06T18:14:37.1961329Z Entering 'third_party/tensorpipe'
2023-05-06T18:14:37.1981429Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2014013Z Entering 'third_party/tensorpipe/third_party/googletest'
2023-05-06T18:14:37.2033496Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2067606Z Entering 'third_party/tensorpipe/third_party/libnop'
2023-05-06T18:14:37.2087151Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2119975Z Entering 'third_party/tensorpipe/third_party/libuv'
2023-05-06T18:14:37.2138600Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2174058Z Entering 'third_party/tensorpipe/third_party/pybind11'
2023-05-06T18:14:37.2193293Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2226207Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2023-05-06T18:14:37.2246545Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2286867Z Entering 'third_party/zstd'
2023-05-06T18:14:37.2307187Z http.https://github.com/.extraheader
2023-05-06T18:14:37.2469785Z A job completed hook has been configured by the self-hosted runner administrator
2023-05-06T18:14:37.2497197Z ##[group]Run '/home/weiwangmeta/post-job.sh'
2023-05-06T18:14:37.2513941Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2023-05-06T18:14:37.2514244Z ##[endgroup]
2023-05-06T18:14:37.2552255Z The approximate cost for this shard is shown below (in $):
2023-05-06T18:14:37.2579603Z 32.613
2023-05-06T18:14:37.2891670Z Cleaning up orphan processes
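
Note on the credential cleanup recorded above: the long run of "Entering '<submodule>'" / "http.https://github.com/.extraheader" pairs is the checkout step's post-job phase scrubbing the injected GitHub auth header from the superproject and from every submodule. Below is a minimal bash sketch of the equivalent commands, reconstructed from the command fragment visible in the log; it is not the checkout action's actual implementation, and the use of GITHUB_WORKSPACE as the repository root is an assumption.

    # Sketch only: approximates the post-job credential scrub seen in the log above.
    cd "${GITHUB_WORKSPACE}"   # assumed to be the checked-out pytorch/pytorch root

    # Remove the injected Authorization header from the superproject, if it is set.
    git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' \
      && git config --local --unset-all 'http.https://github.com/.extraheader' || :

    # Repeat in every submodule. Each "Entering '<path>'" line in the log is one
    # iteration of this loop, and the "http.https://github.com/.extraheader" line
    # that follows it is the matching config key being reported before removal.
    git submodule foreach --recursive "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"

The `|| :` keeps the step green when a repository never had the header configured, which is why the scrub succeeds even for submodules that were cloned without credentials.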
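
The post-job hook also reports an approximate dollar cost for this shard (32.613 above). The contents of /home/weiwangmeta/post-job.sh are not captured in this log, so the following is a purely illustrative sketch of how such a figure could be derived; JOB_SECONDS and HOURLY_RATE_USD are hypothetical inputs, not variables the runner is known to provide.

    # Illustrative only: the real post-job.sh is not shown in this log.
    job_seconds="${JOB_SECONDS:?set to the job's wall-clock duration in seconds}"   # hypothetical
    hourly_rate="${HOURLY_RATE_USD:?set to the runner's on-demand price in \$/hour}" # hypothetical
    echo "The approximate cost for this shard is shown below (in \$): "
    awk -v s="$job_seconds" -v r="$hourly_rate" 'BEGIN { printf "%.3f\n", s / 3600 * r }'

For example, an 8-hour job on a hypothetical $4/hour instance would print 32.000, the same order of magnitude as the figure reported above.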